| { |
| "paper_id": "E95-1016", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T10:31:41.868990Z" |
| }, |
| "title": "On Learning more Appropriate Selectional Restrictions", |
| "authors": [ |
| { |
| "first": "Francesc", |
| "middle": [], |
| "last": "Ribas", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "We present some variations affecting the association measure and thresholding on a technique for learning Selectional Restrictions from on-line corpora. It uses a wide-coverage noun taxonomy and a statistical measure to generalize the appropriate semantic classes. Evaluation measures for the Selectional Restrictions learning task are discussed. Finally, an experimental evaluation of these variations is reported.", |
| "pdf_parse": { |
| "paper_id": "E95-1016", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "We present some variations affecting the association measure and thresholding on a technique for learning Selectional Restrictions from on-line corpora. It uses a wide-coverage noun taxonomy and a statistical measure to generalize the appropriate semantic classes. Evaluation measures for the Selectional Restrictions learning task are discussed. Finally, an experimental evaluation of these variations is reported.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "In recent years there has been a common agreement in the NLP research community on the importance of having an extensive coverage of selectional restrictions (SRs) tuned to the domain to work with. SRs can be seen as semantic type constraints that a word sense imposes on the words with which it combines in the process of semantic interpretation. SRs may have different applications in NLP, specifically, they may help a parser with Word Sense Selection (WSS, as in ), with preferring certain structures out of several grammatical ones and finally with deciding the semantic role played by a syntactic complement . Lexicography is also interested in the acquisition of SRs (both defining in context approach and lexical semantics work ).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The aim of our work is to explore the feasibility of using an statistical method for extracting SRs from on-line corpora. developed a method for automatically extracting classbased SRs from on-line corpora. *This research has been made in the framework of the Acquilex-II Esprit Project (7315), and has been supported by a grant of Departament d'Ensenyament, Generalitat de Catalunya, 91-DOGC-1491.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "performed some experiments using this basic technique and drew up some limitations from the corresponding results.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this paper we will describe some substantial modifications to the basic technique and will report the corresponding experimental evaluation. The outline of the paper is as follows: in section 2 we summarize the basic methodology used in , analyzing its limitations; in section 3 we explore some alternative statistical measures for ranking the hypothesized SRs; in section 4 we propose some evaluation measures on the SRs-learning problem, and use them to test the experimental results obtained by the different techniques; finally, in section 5 we draw up the final conclusions and establish future lines of research.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The technique functionality can be summarized as: Input The training set, i.e. a list of complement co-occurrence triples, (verblemma, syntactic-relationship, noun-lemma) extracted from the corpus.", |
| "cite_spans": [ |
| { |
| "start": 123, |
| "end": 170, |
| "text": "(verblemma, syntactic-relationship, noun-lemma)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Description", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Previous knowledge used A semantic hierarchy (WordNet 1) where words are clustered in semantic classes, and semantic classes are organized hierarchically. Polysemous words are represented as instances of different classes.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Description", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Output A set of syntactic SRs, (verb-lemma, syntactic-relationship, semantic-class, weight) . The final SRs must be mutually disjoint. SRs are weighted according to the statistical evidence found in the corpus.", |
| "cite_spans": [ |
| { |
| "start": 31, |
| "end": 91, |
| "text": "(verb-lemma, syntactic-relationship, semantic-class, weight)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Description", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Learning process 3 stages:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Description", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "1. Creation of the space of candidate classes.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Description", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "1WordNet is a broad-coverage lexieal database, see The appropriateness of a class for expressing SRs (stage 2) is quantified from tile strength of co-occurrence of verbs and classes of nouns in the. corpus . Given the verb v, the syntactic-relationship s and the candidate class c, the Association Score, Assoc, between v and c in s is defined: Probabilities are estimated by Maximum Likelihood Estimation (MLE), i.e. counting the relative frequency of the considered events in the corpuQ. However, it is not obvious how to calculate class frequencies when the training corpus is not semantically tagged as is the case. Nevertheless, we take a simplistic approach and calculate them in the following manner:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Description", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Assoc(v,s,c) = p(clv, s)I(v;cls ) = p(clv, s)log p( lv, s. _____) )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Description", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "freq(v,s,c) = ~freq(v,s,n) \u00d7 w (1) rtEc", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Description", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "2The utility of introducing smoothing techniques on class-based distributions is dubious, see ).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Description", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Where w is a constant factor used to normalize the probabilities 3", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Description", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "W .~ ~vEV ~sqS ~nqAf freq( v, S, n)lsenses(n)l", |
| "eq_num": "(2)" |
| } |
| ], |
| "section": "Description", |
| "sec_num": "2.1" |
| }, |
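To make the estimation concrete, here is a minimal Python sketch of equations 1 and 2 and the Assoc score. The triples, the noun-to-senses map, and all class names are invented for illustration (they are not from the paper's corpus or WordNet), and w is read as the reciprocal of the total sense-spread mass so that the weighted counts normalize:

```python
import math

# Illustrative (verb, syntactic-relationship, noun) triples and a toy
# noun -> senses map; none of these names come from the paper's data.
triples = [("rise", "subj", "price"), ("rise", "subj", "rate"),
           ("rise", "subj", "stock"), ("report", "obj", "profit"),
           ("report", "subj", "company")]
senses = {"price": ["cost"], "rate": ["cost", "speed"],
          "stock": ["asset", "livestock"], "profit": ["asset"],
          "company": ["group"]}
classes = {c for cs in senses.values() for c in cs}
verbs = {v for v, _, _ in triples}

# Equation 2, read as a reciprocal so that w normalizes: each noun
# occurrence spreads one unit of evidence over its |senses(n)| classes.
w = 1.0 / sum(len(senses[n]) for _, _, n in triples)

def freq(v, s, c):
    # Equation 1: class frequency as a weighted sum over member nouns.
    return sum(w for v2, s2, n in triples if (v2, s2) == (v, s) and c in senses[n])

def p_c_given_vs(v, s, c):
    den = sum(freq(v, s, c2) for c2 in classes)
    return freq(v, s, c) / den if den else 0.0

def p_c_given_s(s, c):
    den = sum(freq(v2, s, c2) for v2 in verbs for c2 in classes)
    num = sum(freq(v2, s, c) for v2 in verbs)
    return num / den if den else 0.0

def assoc(v, s, c):
    # Assoc(v,s,c) = p(c|v,s) * log( p(c|v,s) / p(c|s) )
    p, q = p_c_given_vs(v, s, c), p_c_given_s(s, c)
    return p * math.log(p / q) if p > 0 and q > 0 else 0.0
```

On this toy data, <cost> gets the highest Assoc for the subject of rise, since two of the three observed subject heads share that sense.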
| { |
| "text": "When creating the space of candidate classes (learning process, stage 1), we use a thresholding technique to ignore as much as possible the noise introduced in the training set. Specifically, we consider only those classes that have a higher number of occurrences than the threshold. The selection of the most appropriate classes (stage 3) is based on a global search through the candidates, in such a way that the final classes are mutually disjoint (not related by hyperonymy).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Description", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Ribas (1994a) reported experimental results obtained from the application of the above technique to learn SRs. He performed an evaluation of the SRs obtained from a training set of 870,000 words of the Wall Street Journal. In this section we summarize the results and conclusions reached in that paper.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "For instance, used equation 1 without introducing normalization. Therefore, the estimated function didn't accomplish probability axioms. Nevertheless, their results should be equivalent (for our purposes) to those introducing normalization because it shouldn't affect the relative ordering of Assoc among rival candidate classes for the same (v, s). suit_of_clothes >, < sugt >) , but the Assoc score ranked them higher than the appropriate sense.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "We can also notice that the l~Abs class, < group >, seems too general for the example nouns, while one of its daughters, < people > seems to fit the data much better.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Analyzing the results obtained from different experimental evaluation methods, Although the performance of the presented technique seems to be quite good, we think that some of the detected flaws could possibly be addressed. Noise due to polysemy of the nouns involved seems to be the main obstacle for the practicality of the technique. It makes the association score prefer incorrect classes and jump on overgeneralizations. In this paper we are interested in exploring various ways to make the technique more robust to noise, namely, (a) to experiment with variations of the association score, (b) to experiment with thresholding.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Variations on the association statistical measure", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "3", |
| "sec_num": null |
| }, |
| { |
| "text": "In this section we consider different variations on the association score in order to make it more robust. The different techniques are experimentally evaluated in section 4.2.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "3", |
| "sec_num": null |
| }, |
| { |
| "text": "When considering the prior probability, the more independent of the context it is the better to measure actual associations. A sensible modification of the measure would be to consider p(c) as the prior distribution: (;'(;; s) Using the chain rule on mutual information (Cover and Thomas, 1991, p. 22) we can mathematically relate the different versions of Assoc, mssoc'(v, s, c) ", |
| "cite_spans": [ |
| { |
| "start": 217, |
| "end": 226, |
| "text": "(;'(;; s)", |
| "ref_id": null |
| }, |
| { |
| "start": 270, |
| "end": 301, |
| "text": "(Cover and Thomas, 1991, p. 22)", |
| "ref_id": null |
| }, |
| { |
| "start": 357, |
| "end": 379, |
| "text": "Assoc, mssoc'(v, s, c)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Variations on the prior probability", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Assoc'(v,s,c) = p(c,v,s) logP", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Variations on the prior probability", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "= p(clv, s)log ~+Assoc(v, s, c)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Variations on the prior probability", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "The first advantage of Assoc' would come from this (information theoretical) relationship. Specifically, the AssoF takes into account the preference (selection) of syntactic positions for particular classes. In intuitive terms, typical subjects (e.g. <person, individual, ...>) would be preferred (to atypical subjects as <suit_of_clothes>) as SRs on the subject in contrast to Assoc. The second advantage is that as long as the prior probabilities, p(c), involve simpler events than those used in Assoc, p(cls), the estimation is easier and more accurate (ameliorating data sparseness).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Variations on the prior probability", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "A subsequent modification would be to estimate the prior, p(c), from the counts of all the nouns appearing in the corpus independently of their syntactic positions (not restricted to be heads of verbal complements). In this way, the estimation of p(c) would be easier and more accurate.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Variations on the prior probability", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "In the global weighting technique presented in equation 2 very polysemous nouns provide the same amount of evidence to every sense as nonambiguous nouns do -while less ambiguous nouns could be more informative about the correct classes as long as they do not carry ambiguity. The weight introduced in (1) could alternatively be found in a local manner, in such a way that more polysemous nouns would give less evidence to each one of their senses than less ambiguous ones. Local weight could be obtained using p(cJn).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Estimating class probabilities from noun frequencies", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Nevertheless, a good estimation of this probability seems quite problematic because of the lack of tagged training material. In absence of a better estimator we use a rather poor one as the uniform distribution, c) = (cln) = e el Is ,,ses(,,)l also uses a local normalization technique but he normalizes by the total number of classes in the hierarchy. This scheme seems to present two problematic features (see (Ribas, 1994b) for more details). First, it doesn't take dependency relationships introduced by hyperonymy into account. Second, nouns categorized in lower levels in the taxonomy provide less weight to each class than higher nouns.", |
| "cite_spans": [ |
| { |
| "start": 412, |
| "end": 426, |
| "text": "(Ribas, 1994b)", |
| "ref_id": "BIBREF29" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Estimating class probabilities from noun frequencies", |
| "sec_num": "3.2" |
| }, |
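The local-weighting idea above can be sketched in a few lines of Python; the nouns and sense labels are invented for illustration, and the uniform p(c|n) = 1/|senses(n)| estimator is the one named in the text:

```python
from collections import defaultdict

# Local weighting sketch with the uniform estimator p(c|n) = 1/|senses(n)|:
# each noun occurrence splits one unit of evidence equally among its senses,
# so ambiguous nouns contribute less per class than unambiguous ones.
# Nouns and sense labels are invented for illustration.
senses = {"price": ["cost"], "rate": ["cost", "speed"]}
occurrences = ["price", "rate", "rate"]  # noun heads seen for some (v, s) pair

class_freq = defaultdict(float)
for n in occurrences:
    for c in senses[n]:
        class_freq[c] += 1.0 / len(senses[n])  # local weight p(c|n)

# Total mass equals the number of occurrences (3), so the counts are
# already consistent with a probability distribution after dividing by 3.
```

Note how the two ambiguous occurrences of "rate" contribute only half a count to each of their senses, whereas unambiguous "price" contributes a full count to <cost>.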
| { |
| "text": "In this section we propose the application of other measures apart from Assoc for learning SRs: loglikelihood ratio , relative entropy , mutual information ratio , \u00a22 . In section (4) their experimental evaluation is presented. The statistical measures used to detect associations on the distribution defined by two random variables X and Y work by measuring the deviation of the conditional distribution, P(XJY), from the expected distribution if both variables were considered independent, i.e. the marginal distribution, P(X). If P(X) is a good approximation and mutual information ratio consider only the deviation of the conditional probability p(c [v,s) from the corresponding marginal, p(c).", |
| "cite_spans": [ |
| { |
| "start": 654, |
| "end": 659, |
| "text": "[v,s)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Other statistical measures to score SRs", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "\"~C p(-~clv~) p(-,cl-,v-s) p(-~c)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Other statistical measures to score SRs", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "On the other hand, log-likelihood ratio and \u00a22 measure the association between v_s and c considering the deviation of the four conditional cells in table 2 from the corresponding marginals. It is plausible that the deviation of the cells not taken into account by Assoc can help on extracting useful Sits.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Other statistical measures to score SRs", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "Finally, it would be interesting to only use the information related to the selectional behavior of v_s, i.e. comparing the conditional probabilities of c and -~c given v_s with the corresponding marginals. Relative entropy, D(P(XIv_s)IIP(X)) , could do this job.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Other statistical measures to score SRs", |
| "sec_num": "3.3" |
| }, |
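As a sketch of this last proposal, relative entropy between the conditional distribution for a verb-slot and the marginal can be computed directly; the distributions below are made-up numbers, not corpus estimates:

```python
import math

def kl_divergence(p, q):
    # D(P || Q) = sum_x p(x) * log(p(x) / q(x)); assumes q(x) > 0 where p(x) > 0.
    return sum(px * math.log(px / q[x]) for x, px in p.items() if px > 0)

# Made-up distributions over {c, not-c}: one conditioned on a verb-slot v_s,
# one marginal over the whole corpus. A larger divergence means v_s selects
# more strongly for (or against) the class c.
p_given_vs = {"c": 0.7, "not_c": 0.3}
marginal = {"c": 0.4, "not_c": 0.6}
score = kl_divergence(p_given_vs, marginal)  # strictly positive here
```

The divergence is zero exactly when the verb-slot imposes no selectional preference, i.e. when the conditional distribution equals the marginal.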
| { |
| "text": "Evaluation on NLP has been crucial to fostering research in particular areas. Evaluation of the SR learning task would provide grounds to compare different techniques that try to abstract SRs from corpus using WordNet (e.g, section 4.2). It would also permit measuring the utility of the SRs obtained using WordNet in comparison with other frameworks using other kinds of knowledge. Finally it would be a powerful tool for detecting flaws of a particular technique (e.g, analysis).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation methods of SRs", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "However, a related and crucial issue is which linguistic tasks are used as a reference. SRs are useful for both lexicography and NLP. On the one hand, from the point of view of lexicography, the goal of evaluation would be to measure the quality of the SRs induced, (i.e., how well the resulting classes correspond to the nouns as they were used in the corpus). On the other hand, from the point of view of NLP, StLs should be evaluated on their utility (i.e., how much they help on performing the reference task).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation methods of SRs", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "As far as lexicography (quality) is concerned, we think the main criteria SRs acquired from corpora should meet are: (a) correct categorization -inferred classes should correspond to the correct senses of the words that are being generalized-, (b) appropriate generalization level and (c) good coverage -the majority of the noun occurrences in the corpus should be successfully generalized by the induced SRs.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lexicography-oriented evaluation", |
| "sec_num": "4.1.1" |
| }, |
| { |
| "text": "Some of the methods we could use for assessing experimentally the accomplishment of these criteria would be:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lexicography-oriented evaluation", |
| "sec_num": "4.1.1" |
| }, |
| { |
| "text": "\u2022 Introspection A lexicographer checks if the SRs accomplish the criteria (a) and (b) above (e.g., the manual diagnosis in table 1). Besides the intrinsic difficulties of this approach, it does not seem appropriate when comparing across different techniques for learning SRs, because of its qualitative flavor.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lexicography-oriented evaluation", |
| "sec_num": "4.1.1" |
| }, |
| { |
| "text": "\u2022 Quantification of generalization level appropriateness A possible measure would be the percentage of sense occurrences included in the induced SRs which are effectively correct (from now on called Abstraction Ratio). Hopefully, a technique with a higher abstraction ratio learns classes that fit the set of examples better. A manual assessment of the ratio confirmed this behavior, as testing sets With a lower ratio seemed to be inducing less ~Abs cases.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lexicography-oriented evaluation", |
| "sec_num": "4.1.1" |
| }, |
| { |
| "text": "\u2022 Quantification of coverage It could be measured as the proportion of triples whose correct sense belongs to one of the SRs.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lexicography-oriented evaluation", |
| "sec_num": "4.1.1" |
| }, |
| { |
| "text": "The NLP tasks where SRs utility could be evaluated are diverse. Some of them have already been introduced in section 1. In the recent literature ( , , ...) several task oriented schemes to test Selectional Restrictions (mainly on syntactic ambiguity resolution) have been proposed. However, we have tested SRs on a WSS task, using the following scheme. For every triple in the testing set the algorithm selects as most appropriate that noun-sense that has as hyperonym the SR class with highest association score. When more than one sense belongs to the highest SR, a random selection is performed. When no SR has been acquired, the algorithm remains undecided. The results of this WSS procedure are checked against a testing-sample manually analyzed, and precision and recall ratios are calculated. Precision is calculated as the ratio of manual-automatic matches / number of noun occurrences disambiguated by the procedure. Recall is computed as the ratio of manual-automatic matches / total number of noun occurrences. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "NLP evaluation tasks", |
| "sec_num": "4.1.2" |
| }, |
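The WSS evaluation scheme above can be sketched as follows; the sense labels and scores are invented, and the random tie-break among senses of the top-ranked class is omitted for brevity:

```python
# Sketch of the WSS evaluation scheme: pick the sense whose (hyperonym)
# class has the highest association score among the learned SRs, stay
# undecided when no SR covers any sense, then score the predictions
# against a manual analysis. All sense labels and scores are invented.

def disambiguate(noun_senses, sr_scores):
    scored = [(sr_scores[c], c) for c in noun_senses if c in sr_scores]
    if not scored:
        return None  # no acquired SR applies: remain undecided
    return max(scored)[1]

def precision_recall(predictions, gold):
    matches = sum(1 for p, g in zip(predictions, gold) if p is not None and p == g)
    decided = sum(1 for p in predictions if p is not None)
    precision = matches / decided if decided else 0.0
    recall = matches / len(gold)  # undecided cases hurt recall, not precision
    return precision, recall

sr_scores = {"cost": 0.7, "asset": 0.2}  # hypothetical SRs for one (verb, slot)
preds = [disambiguate(s, sr_scores)
         for s in [["cost"], ["cost", "speed"], ["livestock"]]]
prec, rec = precision_recall(preds, ["cost", "cost", "asset"])
```

With these toy inputs the third occurrence stays undecided, so precision is computed over two decided cases while recall is computed over all three.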
| { |
| "text": "In order to evaluate the different variants on the association score and the impact of thresholding we performed several experiments. In this section we analyze the results. As training set we used the 870,000 words of WSJ material provided in the ACL/DCI version of the Penn Treebank. The testing set consisted of 2,658 triples corresponding to four average common verbs in the Treebank: rise, report, seek and present. We only considered those triples that had been correctly extracted from the Treebank and whose noun had the correct sense included in WordNet (2,165 triples out of the 2,658, from now on, called the testingsample).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental results", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "As evaluation measures we used coverage, abstraction ratio, and recall and precision ratios on the WSS task (section 4.1). In addition we performed some evaluation by hand comparing the SRs acquired by the different techniques.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental results", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Comparing different techniques Coverage for the different techniques is shown in table 3. The higher the coverage, the better the technique succeeds in correctly generalizing more of the input examples. The labels used for referring to the different techniques are as follows: \"Assoc & p(cls)\" corresponds to the basic association measure (section 2), \"Assoc & Head-nouns\" and \"Assoc & All nouns\" to the techniques introduced in section 3.1, \"Assoe & Normalizing\" to the local normalization (section 3.2), and finally, log-likelihood, D (relative entropy) and I (mutual information ratio) to the techniques discussed in section 3.3.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "4.2.1", |
| "sec_num": null |
| }, |
| { |
| "text": "The abstraction ratio for the different techniques is shown in table 4. In principle, the higher abstraction ratio, the better the technique succeeds in filtering out incorrect senses (less tAbs).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "4.2.1", |
| "sec_num": null |
| }, |
| { |
| "text": "The precision and recall ratios on the noun WSS task for the different techniques are shown in table 5. In principle, the higher the precision and recall ratios the better the technique succeeds in inducing appropriate SRs for the disambiguation task.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "4.2.1", |
| "sec_num": null |
| }, |
| { |
| "text": "As far as the evaluation measures try to account for different phenomena the goodness of a particular technique should be quantified as a trade-off. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "4.2.1", |
| "sec_num": null |
| }, |
| { |
| "text": "78.5 77.9 76.7 74.4 74.1 73.3 63 45.7 62.7 Table 5 : Precision and Recall on the WSS task Most of the results are very similar (differences are not statistically significative). Therefore we should be cautious when extrapolating the results. Some of the conclusions from the tables above are:", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 43, |
| "end": 50, |
| "text": "Table 5", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Rec. (%)", |
| "sec_num": null |
| }, |
| { |
| "text": "1. 4) 2 and I get sensibly worse results than other measures (although abstraction is quite good).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Rec. (%)", |
| "sec_num": null |
| }, |
| { |
| "text": "2. The local normalizing technique using the uniform distribution does not help. It seems that by using the local weighting we misinform the algorithm. The problem is the reduced weight, that polysemous nouns get, while they seem to be the most informative 4. However, a better informed kind of local weight (section 5) should improve the technique significantly.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Rec. (%)", |
| "sec_num": null |
| }, |
| { |
| "text": "3. All versions of Assoc (except the local normalization) get good results. Specially the two techniques that exploit a simpler prior distribution, which seem to improve the basic technique.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Rec. (%)", |
| "sec_num": null |
| }, |
| { |
| "text": "4. log-likelihood and D seem to get slightly worse results than Assoc techniques, although the results are very similar.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Rec. (%)", |
| "sec_num": null |
| }, |
| { |
| "text": "We were also interested in measuring the impact of thresholding on the SRs acquired. In figure 1 we can see the different evaluation measures of the basic technique when varying the threshold. Precision and recall coincide when no candidate 4In some way, it conforms to Zipf-law : noun frequency and polysemy are correlated. it might be expected, as the threshold increases (i.e. some cases are not classified) the two ratios slightly diverge (precision increases and recall diminishes). Figure 1 also shows the impact of thresholding on coverage and abstraction ratios. Both decrease\" when threshold increases, probably because when the rejecting threshold is low, small classes that fit the data well can be induced, learning overgeneral or incomplete SRs otherwise.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 88, |
| "end": 96, |
| "text": "figure 1", |
| "ref_id": "FIGREF3" |
| }, |
| { |
| "start": 488, |
| "end": 496, |
| "text": "Figure 1", |
| "ref_id": "FIGREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Thresholdlng", |
| "sec_num": "4.2.2" |
| }, |
| { |
| "text": "Finally, it seems that precision and abstraction ratios are in inverse co-relation (as precision grows, abstraction decreases). In terms of WSS, general classes may be performing better than classes that fit the data better. Nevertheless, this relationship should be further explored in future work.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Thresholdlng", |
| "sec_num": "4.2.2" |
| }, |
| { |
| "text": "Conclusions and future work", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "5", |
| "sec_num": null |
| }, |
| { |
| "text": "In this paper we have presented some variations affecting the association measure and thresholding on tile basic technique for learning SRs fi'om online corpora. We proposed some evaluation measures for the SRs learning task. Finally, experimental results on these variations were reported. We can conclude that some of these variations seem to improve the results obtained using the basic technique. However, although the technique still seems far from practical application to NLP tasks, it may be most useful for providing experimental insight to lexicographers. Future lines of research will mainly concentrate on improving the local normalization technique by solving the noun sense ambiguity. We have foreseen the application of the following techniques:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "5", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2022 Simple techniques to decide the best sense c given the target noun n using estimates of the n-grams: P(e), f(eln), P(clv, s) and P (cJv, s,n) , obtained from supervised and un-supervised corpora. \u2022 Combining the different n-grams by means of smoothing techniques.", |
| "cite_spans": [ |
| { |
| "start": 133, |
| "end": 143, |
| "text": "(cJv, s,n)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "5", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2022 Calculating P(elv ,s,n) combining P(nle ) and P(clv ,s), and applying the EM Algorithm to improve the model.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "5", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2022 Using the WordNet hierarchy as a source of backing-off knowledge, in such a way that if n-grams composed by c aren't enough to decide the best sense (are equal to zero), the tri-grams of ancestor classes could be used instead. it might be expected, as the threshold increases (i.e. some cases are not classified) the two ratios slightly diverge (precision increases and recall diminishes). Figure 1 also shows the impact of thresholding on coverage and abstraction ratios. Both decrease\" when threshold increases, probably because when the rejecting threshold is low, small classes that fit the data well can be induced, learning overgeneral or incomplete SRs otherwise.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 392, |
| "end": 400, |
| "text": "Figure 1", |
| "ref_id": "FIGREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "5", |
| "sec_num": null |
| }, |
| { |
| "text": "Finally, it seems that precision and abstraction ratios are in inverse co-relation (as precision grows, abstraction decreases). In terms of WSS, general classes may be performing better than classes that fit the data better. Nevertheless, this relationship should be further explored in future work.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "5", |
| "sec_num": null |
| }, |
| { |
| "text": "Conclusions and future work", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "5", |
| "sec_num": null |
| }, |
| { |
| "text": "In this paper we have presented some variations affecting the association measure and thresholding on the basic technique for learning SRs from online corpora. We proposed some evaluation measures for the SRs learning task. Finally, experimental results on these variations were reported. We can conclude that some of these variations seem to improve the results obtained using the basic technique. However, although the technique still seems far from practical application to NLP tasks, it may be most useful for providing experimental insight to lexicographers. Future lines of research will mainly concentrate on improving the local normalization teclmique by solving the noun sense ambiguity. We have foreseen the application of the following techniques:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "5", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2022 Simple techniques to decide the best sense c given the target noun n, using estimates of the n-grams P(c), P(c|n), P(c|v,s) and P(c|v,s,n) obtained from supervised and unsupervised corpora. \u2022 Combining the different n-grams by means of smoothing techniques.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "5", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2022 Calculating P(c|v,s,n) by combining P(n|c) and P(c|v,s), and applying the EM algorithm to improve the model.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "5", |
| "sec_num": null |
| }, |
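The combination in this step can be sketched as the usual Bayesian decomposition, P(c|v,s,n) ∝ P(n|c) · P(c|v,s), normalized over the candidate classes. The probabilities below are invented toy numbers, chosen only to show the shape of the computation.

```python
# Toy sketch: P(c|v,s,n) proportional to P(n|c) * P(c|v,s), normalized
# over the candidate classes c. All probabilities below are invented.
p_n_given_c = {"person": 0.02, "suit_of_clothes": 0.10}   # P(n|c) for one noun n
p_c_given_vs = {"person": 0.60, "suit_of_clothes": 0.05}  # P(c|v,s)

def p_c_given_vsn(p_n_c, p_c_vs):
    """Combine the two models and renormalize over candidate classes."""
    unnorm = {c: p_n_c[c] * p_c_vs[c] for c in p_n_c}
    z = sum(unnorm.values())
    return {c: x / z for c, x in unnorm.items()}

dist = p_c_given_vsn(p_n_given_c, p_c_given_vs)
print(dist)
```

EM would then re-estimate P(n|c) and P(c|v,s) from the expected class assignments induced by this distribution.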
| { |
| "text": "\u2022 Using the WordNet hierarchy as a source of backing-off knowledge, in such a way that if the n-grams involving c are not sufficient to decide the best sense (i.e., their counts are zero), the tri-grams of ancestor classes can be used instead.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "5", |
| "sec_num": null |
| } |
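The backing-off idea can be sketched as follows. The toy hierarchy, counts, and function name are invented for illustration; a real implementation would walk WordNet hypernym chains.

```python
# Toy sketch of backing off to ancestor classes when the tri-gram counts
# for a candidate class c are zero. Hierarchy and counts are invented.
parent = {"dog": "animal", "animal": "entity", "entity": None}

# tri-gram counts: (class, verb, syntactic position) -> frequency
trigram_counts = {
    ("animal", "feed", "obj"): 7,
    ("entity", "feed", "obj"): 9,
}

def backed_off_count(c, v, s):
    """Walk up the hierarchy until a non-zero tri-gram count is found."""
    while c is not None:
        n = trigram_counts.get((c, v, s), 0)
        if n > 0:
            return n, c
        c = parent[c]
    return 0, None

print(backed_off_count("dog", "feed", "obj"))  # backs off from "dog" to "animal"
```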
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Computational lexicons: the neat examples and the odd exemplars", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Basili", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [ |
| "T" |
| ], |
| "last": "Pazienza", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Velardi", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "Procs 3rd ANLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "R. Basili, M.T. Pazienza, and P. Velardi. 1992. Computational lexicons: the neat examples and the odd exemplars. In Procs 3rd ANLP, Trento, Italy, April.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Word association norms, mutual information and lexicography", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [ |
| "W" |
| ], |
| "last": "Church", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Hanks", |
| "suffix": "" |
| } |
| ], |
| "year": 1990, |
| "venue": "Computational Linguistics", |
| "volume": "", |
| "issue": "1", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "K.W. Church and P. Hanks. 1990. Word associa- tion norms, mutual information and lexicogra- phy. Computational Linguistics, 16(1).", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Elements of Information Theory", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Cover", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [ |
| "A" |
| ], |
| "last": "Thomas", |
| "suffix": "" |
| } |
| ], |
| "year": 1991, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M. Cover and J.A. Thomas, editors. 1991. El- ements of Information Theory. John Wiley.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Maximum likelihood from incomplete data via the em algorithm", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [ |
| "P" |
| ], |
| "last": "Dempster", |
| "suffix": "" |
| }, |
| { |
| "first": "N", |
| "middle": [ |
| "M" |
| ], |
| "last": "Laird", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [ |
| "B" |
| ], |
| "last": "", |
| "suffix": "" |
| } |
| ], |
| "year": 1977, |
| "venue": "Journal of the Royal Statistical Society", |
| "volume": "39", |
| "issue": "B", |
| "pages": "1--38", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "A. P. Dempster, N. M. Laird, and D. B. Ru- bin. 1977. Maximum likelihood from incom- plete data via the em algorithm. Journal of the Royal Statistical Society, 39(B):1-38.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Accurate methods for the statistics of surprise and coincidence", |
| "authors": [ |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Dunning", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "Computational Linguistics", |
| "volume": "19", |
| "issue": "1", |
| "pages": "61--74", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "T. Dunning. 1993. Accurate methods for the statistics of surprise and coincidence. Compu- tational Linguistics, 19(1):61-74.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Identifying word correspondences in parallel texts", |
| "authors": [ |
| { |
| "first": "W", |
| "middle": [], |
| "last": "Gale", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [ |
| "W" |
| ], |
| "last": "Church", |
| "suffix": "" |
| } |
| ], |
| "year": 1991, |
| "venue": "DARPA Speech and Natural Language Workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "W. Gale and K. W. Church. 1991. Identify- ing word correspondences in parallel texts. In DARPA Speech and Natural Language Work- shop, Pacific Grove, California, February.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Acquisition of selectional patterns", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Grishman", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Sterling", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "COLING", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "R. Grishman and J. Sterling. 1992. Acquisition of selectional patterns. In COLING, Nantes, France, march.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Semantic interpretation and the resolution of ambiguity", |
| "authors": [ |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Hirst", |
| "suffix": "" |
| } |
| ], |
| "year": 1987, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "G. Hirst. 1987. Semantic interpretation and the resolution of ambiguity. Cambridge University Press.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Towards a lexical organization of English verbs", |
| "authors": [ |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Levin", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "B. Levin. 1992. Towards a lexical organization of English verbs. University of Chicago Press.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Five papers on wordnet", |
| "authors": [ |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Miller", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Beckwith", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Fellbaum", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Gross", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Miller", |
| "suffix": "" |
| } |
| ], |
| "year": 1991, |
| "venue": "International Journal of Lexicography", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "G. Miller, R. Beckwith, C. Fellbaum, D. Gross, and K. Miller. 1991. Five papers on wordnet. International Journal of Lexicography.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Wordnet and distributional analysis: A class-based approach to lexical discovery", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [ |
| "S" |
| ], |
| "last": "Resnik", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "AAAI Symposium on Probabilistic Approaches to NL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "P. S. Resnik. 1992. Wordnet and distributional analysis: A class-based approach to lexical dis- covery. In AAAI Symposium on Probabilistic Approaches to NL, San Jose, CA.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Selection and Information: A Class-Based Approach to lexical relationships", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [ |
| "S" |
| ], |
| "last": "Resnik", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "P. S. Resnik. 1993. Selection and Information: A Class-Based Approach to lexical relationships.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "An experiment on learning appropriate selectional restrictions from a parsed corpus", |
| "authors": [ |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Ribas", |
| "suffix": "" |
| } |
| ], |
| "year": 1994, |
| "venue": "COLING", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "F. Ribas. 1994a. An experiment on learning ap- propriate selectional restrictions from a parsed corpus. In COLING, Kyoto, Japan, August.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "Learning more appropriate selectional restrictions", |
| "authors": [ |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Ribas", |
| "suffix": "" |
| } |
| ], |
| "year": 1994, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "F. Ribas. 1994b. Learning more appropriate selectional restrictions. Technical report, ES- PRIT BRA-7315 ACQUILEX-II WP.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "Empirical study of predictive powers of simple attachment schemes for post-modifier prepositional phrases", |
| "authors": [ |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Whittemore", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Ferrara", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Brunner", |
| "suffix": "" |
| } |
| ], |
| "year": 1990, |
| "venue": "Procs. ACL, Pennsylvania", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "G. Whittemore, K. Ferrara, and H. Brunner. 1990. Empirical study of predictive powers of simple attachment schemes for post-modifier prepositional phrases. In Procs. ACL, Pennsylvania.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "The meaning-frequency relationship of words", |
| "authors": [ |
| { |
| "first": "G", |
| "middle": [ |
| "K" |
| ], |
| "last": "Zipf", |
| "suffix": "" |
| } |
| ], |
| "year": 1945, |
| "venue": "The Journal of General Psychology", |
| "volume": "33", |
| "issue": "", |
| "pages": "251--256", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "G. K. Zipf. 1945. The meaning-frequency relationship of words. The Journal of General Psychology, 33:251-256.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "num": null, |
| "text": "p(c|s). The two terms of Assoc try to capture different properties: 1. The mutual information ratio, I(v;c|s), measures the strength of the statistical association between the given verb v and the candidate class c in the given syntactic position s. It compares the prior distribution, p(c|s), with the posterior distribution, p(c|v,s). 2. p(c|v,s) scales up the strength of the association by the frequency of the relationship.", |
| "uris": null, |
| "type_str": "figure" |
| }, |
| "FIGREF1": { |
| "num": null, |
| "text": "drew some conclusions: a. The technique achieves good coverage. b. Most of the acquired classes result from the accumulation of incorrect senses. c. No clear correlation between Assoc and the manual diagnosis is found. d. A slight tendency to over-generalization exists, due to incorrect senses.", |
| "uris": null, |
| "type_str": "figure" |
| }, |
| "FIGREF3": { |
| "num": null, |
| "text": "Assoc: Evaluation ratios vs. Threshold", |
| "uris": null, |
| "type_str": "figure" |
| }, |
| "TABREF1": { |
| "num": null, |
| "html": null, |
| "text": "SRs acquired for the subject of seek", |
| "content": "<table><tr><td>2. Evaluation of the appropriateness of the</td></tr><tr><td>candidates by means of a statistical mea-</td></tr><tr><td>sure.</td></tr><tr><td>3. Selection of the most appropriate subset</td></tr><tr><td>in the candidate space to convey the SRs.</td></tr></table>", |
| "type_str": "table" |
| }, |
| "TABREF2": { |
| "num": null, |
| "html": null, |
| "text": "", |
| "content": "<table><tr><td colspan=\"2\">shows the SRs acquired</td></tr><tr><td colspan=\"2\">for the subject position of the verb seek. Type indi-</td></tr><tr><td colspan=\"2\">cates a manual diagnosis about the class appropri-</td></tr><tr><td colspan=\"2\">ateness (Ok: correct; ~Abs: over-generalization;</td></tr><tr><td>Senses: due to erroneous senses).</td><td>Assoc cor-</td></tr><tr><td colspan=\"2\">responds to the association score (higher values</td></tr><tr><td colspan=\"2\">appear first). Most of the induced classes are</td></tr><tr><td colspan=\"2\">due to incorrect senses. Thus, although suit was</td></tr></table>", |
| "type_str": "table" |
| }, |
| "TABREF3": { |
| "num": null, |
| "html": null, |
| "text": "", |
| "content": "<table><tr><td>: Conditional and marginal distributions</td></tr><tr><td>of P(X|Y), association measures should be low</td></tr><tr><td>(near zero), otherwise deviating significantly from</td></tr><tr><td>zero.</td></tr></table>", |
| "type_str": "table" |
| }, |
| "TABREF4": { |
| "num": null, |
| "html": null, |
| "text": "shows the cross-table formed by the conditional and marginal distributions in the case of X = {c, \u00acc} and Y = {v_s, \u00acv_s}. Different association measures use the information provided in the cross-table to different extents. Thus, Assoc", |
| "content": "<table/>", |
| "type_str": "table" |
| } |
| } |
| } |
| } |