{ "paper_id": "P02-1029", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:30:30.799816Z" }, "title": "Inducing German Semantic Verb Classes from Purely Syntactic Subcategorisation Information", "authors": [ { "first": "Sabine", "middle": [], "last": "Schulte", "suffix": "", "affiliation": {}, "email": "schulte@ims.uni-stuttgart.de" }, { "first": "Chris", "middle": [], "last": "Brew", "suffix": "", "affiliation": {}, "email": "cbrew@ling.ohio-state.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "The paper describes the application of k-Means, a standard clustering technique, to the task of inducing semantic classes for German verbs. Using probability distributions over verb subcategorisation frames, we obtained an intuitively plausible clustering of 57 verbs into 14 classes. The automatic clustering was evaluated against independently motivated, handconstructed semantic verb classes. A series of post-hoc cluster analyses explored the influence of specific frames and frame groups on the coherence of the verb classes, and supported the tight connection between the syntactic behaviour of the verbs and their lexical meaning components.", "pdf_parse": { "paper_id": "P02-1029", "_pdf_hash": "", "abstract": [ { "text": "The paper describes the application of k-Means, a standard clustering technique, to the task of inducing semantic classes for German verbs. Using probability distributions over verb subcategorisation frames, we obtained an intuitively plausible clustering of 57 verbs into 14 classes. The automatic clustering was evaluated against independently motivated, handconstructed semantic verb classes. 
A series of post-hoc cluster analyses explored the influence of specific frames and frame groups on the coherence of the verb classes, and supported the tight connection between the syntactic behaviour of the verbs and their lexical meaning components.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "A long-standing linguistic hypothesis asserts a tight connection between the meaning components of a verb and its syntactic behaviour: To a certain extent, the lexical meaning of a verb determines its behaviour, particularly with respect to the choice of its arguments. The theoretical foundation has been established in extensive work on semantic verb classes such as (Levin, 1993) for English and (V\u00e1zquez et al., 2000) for Spanish: each verb class contains verbs which are similar in their meaning and in their syntactic properties.", "cite_spans": [ { "start": 369, "end": 382, "text": "(Levin, 1993)", "ref_id": "BIBREF9" }, { "start": 399, "end": 421, "text": "(V\u00e1zquez et al., 2000)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "From a practical point of view, a verb classification supports Natural Language Processing tasks, since it provides a principled basis for filling gaps in available lexical knowledge. For example, the English verb classification has been used for applications such as machine translation (Dorr, 1997) , word sense disambiguation (Dorr and Jones, 1996) , and document classification (Klavans and Kan, 1998) . 
Various attempts have been made to infer conveniently observable morpho-syntactic and semantic properties for English verb classes (Dorr and Jones, 1996; Lapata, 1999; Stevenson and Merlo, 1999; Schulte im Walde, 2000; McCarthy, 2001 ).", "cite_spans": [ { "start": 288, "end": 300, "text": "(Dorr, 1997)", "ref_id": "BIBREF1" }, { "start": 329, "end": 351, "text": "(Dorr and Jones, 1996)", "ref_id": "BIBREF0" }, { "start": 382, "end": 405, "text": "(Klavans and Kan, 1998)", "ref_id": "BIBREF6" }, { "start": 539, "end": 561, "text": "(Dorr and Jones, 1996;", "ref_id": "BIBREF0" }, { "start": 562, "end": 575, "text": "Lapata, 1999;", "ref_id": "BIBREF7" }, { "start": 576, "end": 602, "text": "Stevenson and Merlo, 1999;", "ref_id": "BIBREF20" }, { "start": 603, "end": 626, "text": "Schulte im Walde, 2000;", "ref_id": "BIBREF16" }, { "start": 627, "end": 641, "text": "McCarthy, 2001", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To our knowledge this is the first work to obtain German verb classes automatically. We used a robust statistical parser (Schmid, 2000) to acquire purely syntactic subcategorisation information for verbs. The information was provided in form of probability distributions over verb frames for each verb. There were two conditions: the first with relatively coarse syntactic verb subcategorisation frames, the second a more delicate classification subdividing the verb frames of the first condition using prepositional phrase information (case plus preposition). In both conditions verbs were clustered using k-Means, an iterative, unsupervised, hard clustering method with well-known properties, cf. (Kaufman and Rousseeuw, 1990) . 
The goal of a series of cluster analyses was (i) to find good values for the parameters of the clustering process, and (ii) to explore the role of the syntactic frame descriptions in verb classification, to demonstrate the implicit induction of lexical meaning components from syntactic properties, and to suggest ways in which the syntactic information might further be refined. Our long term goal is to support the development of high-quality and large-scale lexical resources.", "cite_spans": [ { "start": 121, "end": 135, "text": "(Schmid, 2000)", "ref_id": "BIBREF14" }, { "start": 699, "end": 728, "text": "(Kaufman and Rousseeuw, 1990)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The syntactic subcategorisation frames for German verbs were obtained by unsupervised learning in a statistical grammar framework (Schulte im Walde et al., 2001 ): a German context-free grammar containing frame-predicting grammar rules and information about lexical heads was trained on 25 million words of a large German newspaper corpus. 
The lexicalised version of the probabilistic grammar served as source for syntactic descriptors for verb frames (Schulte im Walde, 2002b) .", "cite_spans": [ { "start": 142, "end": 160, "text": "Walde et al., 2001", "ref_id": "BIBREF15" }, { "start": 464, "end": 477, "text": "Walde, 2002b)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Syntactic Descriptors for Verb Frames", "sec_num": "2" }, { "text": "The verb frame types contain at most three arguments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntactic Descriptors for Verb Frames", "sec_num": "2" }, { "text": "Possible arguments in the frames are nominative (n), dative (d) and accusative (a) noun phrases, reflexive pronouns (r), prepositional phrases (p), expletive es (x), non-finite clauses (i), finite clauses (s-2 for verb second clauses, s-dass for dass-clauses, s-ob for ob-clauses, s-w for indirect wh-questions), and copula constructions (k). For example, subcategorising a direct (accusative case) object and a non-finite clause would be represented by nai. We defined a total of 38 subcategorisation frame types, according to the verb subcategorisation potential in the German grammar (Helbig and Buscha, 1998) , with few further restrictions on argument combination.", "cite_spans": [ { "start": 587, "end": 612, "text": "(Helbig and Buscha, 1998)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Syntactic Descriptors for Verb Frames", "sec_num": "2" }, { "text": "We extracted verb-frame distributions from the trained lexicalised grammar. Table 1 shows an example distribution for the verb glauben 'to think/believe' (for probability values 1%). We also created a more delicate version of subcategorisation frames that discriminates between different kinds of pp-arguments. 
This was done by distributing the frequency mass of prepositional phrase frame types (np, nap, ndp, npr, xp) over the prepositional phrases, according to their frequencies in the corpus. Prepositional phrases are referred to by case and preposition, such as 'Dat.mit', 'Akk.f\u00fcr'. The resulting lexical subcategorisation for reden and the frame type np, whose total joint probability is 0.35820, is displayed in the accompanying table. The subcategorisation frame descriptions were formally evaluated by comparing the automatically generated verb frames against manual definitions in the German dictionary Duden -Das Stilw\u00f6rterbuch (Dudenredaktion, 2001 ). The F-score was 65.30% with and 72.05% without prepositional phrase information: the automatically generated data is both easy to produce in large quantities and reliable enough to serve as proxy for human judgement (Schulte im Walde, 2002a).", "cite_spans": [ { "start": 396, "end": 419, "text": "(np, nap, ndp, npr, xp)", "ref_id": null }, { "start": 918, "end": 939, "text": "(Dudenredaktion, 2001", "ref_id": null } ], "ref_spans": [ { "start": 76, "end": 83, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Syntactic Descriptors for Verb Frames", "sec_num": "2" }, 
The basic linguistic hypothesis underlying the construction of the semantic classes is that verbs in the same class share both meaning components and syntactic behaviour, since the meaning of a verb is supposed to influence its behaviour in the sentence, especially with regard to the choice of its arguments.", "cite_spans": [ { "start": 105, "end": 118, "text": "(Levin, 1993)", "ref_id": "BIBREF9" }, { "start": 131, "end": 153, "text": "(V\u00e1zquez et al., 2000)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "German Semantic Verb Classes", "sec_num": "3" }, { "text": "We hand-constructed a concise classification with 14 semantic verb classes for 57 German verbs before we initiated any clustering experiments. We have on hand a larger set of verbs and a more elaborate classification, but choose to work on the smaller set for the moment, since an important component of our research program is an informative post-hoc analysis which becomes infeasible with larger datasets. The semantic aspects and majority of verbs are closely related to Levin's English classes. They are consistent with the German verb classification in (Schumacher, 1986) as far as the relevant verbs appear in his less extensive semantic 'fields'.", "cite_spans": [ { "start": 558, "end": 576, "text": "(Schumacher, 1986)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "German Semantic Verb Classes", "sec_num": "3" }, { "text": "1. Aspect: anfangen, aufh\u00f6ren, beenden, beginnen, enden 2. Propositional Attitude:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "German Semantic Verb Classes", "sec_num": "3" }, { "text": "ahnen, denken, glauben, vermuten, wissen 3. Transfer of Possession (Obtaining): bekommen, erhalten, erlangen, kriegen 4. Transfer of Possession (Supply): bringen, liefern, schicken, vermitteln, zustellen 5. Manner of Motion: fahren, fliegen, rudern, segeln 6. Emotion: \u00e4rgern, freuen 7. 
Announcement: ank\u00fcndigen, bekanntgeben, er\u00f6ffnen, verk\u00fcnden 8. Description: beschreiben, charakterisieren, darstellen, interpretieren 9. Insistence: beharren, bestehen, insistieren, pochen 10. Position: liegen, sitzen, stehen 11. Support: dienen, folgen, helfen, unterst\u00fctzen 12. Opening: \u00f6ffnen, schlie\u00dfen 13. Consumption: essen, konsumieren, lesen, saufen, trinken 14. Weather: blitzen, donnern, d\u00e4mmern, nieseln, regnen, schneien", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "German Semantic Verb Classes", "sec_num": "3" }, { "text": "The class size is between 2 and 6, no verb appears in more than one class. For some verbs this is something of an oversimplification; for example, the verb bestehen is assigned to verbs of insistence, but it also has a salient sense more related to existence. Similarly, schlie\u00dfen is recorded under open/close, in spite of the fact it also has a meaning related to inference and the formation of conclusions. The classes include both high and low frequency verbs, because we wanted to make sure that our clustering technology was exercised in both data-rich and data-poor situations. The corpus frequencies range from 8 to 31,710.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "German Semantic Verb Classes", "sec_num": "3" }, { "text": "Our target classification is based on semantic intuitions, not on our knowledge of the syntactic behaviour. As an extreme example, the semantic class Support contains the verb unterst\u00fctzen, which syntactically requires a direct object, together with the three verbs dienen, folgen, helfen which dominantly subcategorise an indirect object. 
In what follows we will show that the semantic classification is largely recoverable from the patterns of verb-frame occurrence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "German Semantic Verb Classes", "sec_num": "3" }, { "text": "Clustering is a standard procedure in multivariate data analysis. It is designed to uncover an inherent natural structure of the data objects, and the equivalence classes induced by the clusters provide a means for generalising over these objects. In our case, clustering is realised on verbs: the data objects are represented by verbs, and the data features for describing the objects are realised by a probability distribution over syntactic verb frame descriptions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Clustering Methodology", "sec_num": "4" }, { "text": "Clustering is applicable to a variety of areas in Natural Language Processing, e.g. by utilising class type descriptions such as in machine translation (Dorr, 1997), word sense disambiguation (Dorr and Jones, 1996), and document classification (Klavans and Kan, 1998), or by applying clusters for smoothing, such as in machine translation or probabilistic grammars.", "cite_spans": [ { "start": 152, "end": 164, "text": "(Dorr, 1997)", "ref_id": "BIBREF1" }, { "start": 193, "end": 215, "text": "(Dorr and Jones, 1996)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Clustering Methodology", "sec_num": "4" }, { "text": "We performed clustering by the k-Means algorithm as proposed by (Forgy, 1965), which is an unsupervised hard clustering method assigning data objects to exactly k clusters. 
Initial verb clusters are iteratively re-organised by assigning each verb to its closest cluster (centroid) and re-calculating cluster centroids until no further changes take place.", "cite_spans": [ { "start": 64, "end": 77, "text": "(Forgy, 1965)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Clustering Methodology", "sec_num": "4" }, { "text": "One parameter of the clustering process is the distance measure used. Standard choices include the cosine, Euclidean distance, Manhattan metric, and variants of the Kullback-Leibler (KL) divergence. We concentrated on two variants of KL in Equation (1): information radius, cf. Equation (2), and skew divergence, recently shown as an effective measure for distributional similarity (Lee, 2001), cf. Equation (3).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Clustering Methodology", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "D(p \\,\\|\\, q) = \\sum_i p_i \\log \\frac{p_i}{q_i} \\quad (1) \\qquad \\mathrm{IRad}(p, q) = D\\Big(p \\,\\Big\\|\\, \\frac{p+q}{2}\\Big) + D\\Big(q \\,\\Big\\|\\, \\frac{p+q}{2}\\Big) \\quad (2) \\qquad \\mathrm{skew}_{\\alpha}(p, q) = D(p \\,\\|\\, \\alpha q + (1 - \\alpha) p)", "eq_num": "(3)" } ], "section": "Clustering Methodology", "sec_num": "4" }, { "text": "Measures (2) and (3) can tolerate zero values in the probability distribution, because they work with a weighted average of the two distributions compared. 
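The three measures can be sketched directly from their definitions; a minimal illustration with distributions as plain Python lists (the function names are ours, not the paper's):

```python
import math

def kl(p, q):
    # Kullback-Leibler divergence D(p || q), Equation (1);
    # assumes q_i > 0 wherever p_i > 0
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def information_radius(p, q):
    # Equation (2): divergence of p and q from their average, so
    # zero probabilities in either distribution are tolerated
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return kl(p, m) + kl(q, m)

def skew_divergence(p, q, alpha=0.9):
    # Equation (3): D(p || alpha*q + (1-alpha)*p); mixing a little of p
    # into q keeps the second argument non-zero wherever p is non-zero
    mix = [alpha * qi + (1 - alpha) * pi for pi, qi in zip(p, q)]
    return kl(p, mix)
```

Information radius is symmetric in its arguments; skew divergence is not, which is part of its appeal as a distributional similarity measure.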
For the skew divergence, we set the weight \u03b1 to 0.9, as was done by Lee.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Clustering Methodology", "sec_num": "4" }, { "text": "Furthermore, because the k-Means algorithm is sensitive to its starting clusters, we explored the option of initialising the cluster centres based on other clustering algorithms. We performed agglomerative hierarchical clustering on the verbs, which first assigns each verb to its own cluster and then iteratively determines the two closest clusters and merges them, until the specified number of clusters is left. We tried several amalgamation methods: single-linkage, complete-linkage, average verb distance, distance between cluster centroids, and Ward's method.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Clustering Methodology", "sec_num": "4" }, { "text": "The clustering was performed as follows: the 57 verbs were associated with probability distributions over frame types 1 (in condition 1 there were 38 frame types, while in the more delicate condition 2 there were 171, with a concomitant increase in data sparseness), and assigned to starting clusters (randomly or by hierarchical clustering). 
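The hard re-assignment loop described above can be sketched as follows; this is an illustrative re-implementation under our own naming, not the code used in the experiments:

```python
def kmeans(verbs, vectors, centroids, distance):
    """Hard k-Means: re-assign each verb to its closest centroid and
    re-compute centroids until the assignment no longer changes.
    `vectors` maps each verb to its probability distribution over frames;
    `centroids` holds the initial cluster centres (random or taken from
    a hierarchical pre-clustering); `distance` is pluggable, e.g. one of
    the KL variants above or Euclidean distance."""
    assignment = None
    while True:
        new_assignment = {
            v: min(range(len(centroids)),
                   key=lambda c: distance(vectors[v], centroids[c]))
            for v in verbs
        }
        if new_assignment == assignment:   # fixed point reached
            return assignment
        assignment = new_assignment
        # each centroid becomes the componentwise mean of its members
        for c in range(len(centroids)):
            members = [vectors[v] for v in verbs if assignment[v] == c]
            if members:
                centroids[c] = [sum(col) / len(members)
                                for col in zip(*members)]
```

Because the loop only ever moves verbs to strictly closer centroids, it converges to a local minimum, which is why the choice of starting clusters matters.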
The k-Means algorithm was then allowed to run for as many iterations as it takes to reach a fixed point, and the resulting clusters were interpreted and evaluated against the manual classes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Clustering Methodology", "sec_num": "4" }, { "text": "Related work on English verb classification or clustering utilised supervised learning by decision trees (Stevenson and Merlo, 1999) , or a method related to hierarchical clustering (Schulte im Walde, 2000).", "cite_spans": [ { "start": 105, "end": 132, "text": "(Stevenson and Merlo, 1999)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Clustering Methodology", "sec_num": "4" }, { "text": "The task of evaluating the result of a cluster analysis against the known gold standard of hand-constructed verb classes requires us to assess the similarity between two sets of equivalence relations. As noted by (Strehl et al., 2000) , it is useful to have an evaluation measure that does not depend on the choice of similarity measure or on the original dimensionality of the input data, since that allows meaningful comparison of results for which these parameters vary. This is similar to the perspective of (Vilain et al., 1995) , who present, in the context of the MUC co-reference evaluation scheme, a model-theoretic measure of the similarity between equivalence classes. (4) This manipulation is designed to remove the bias towards small clusters: 2 using the 57 verbs from our study we generated 50 random clusters for each cluster size between 1 and 57, and evaluated the results against the gold standard, returning the best result for each replication. We found that even using the scaling factor the measure favours smaller clusters. 
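Plain pairwise precision and recall over co-membership pairs, the set-of-pairs representation discussed here, can be sketched as follows (this is the unadjusted variant, without the scaling factor; names are ours):

```python
from itertools import combinations

def pairs(clustering):
    # set of unordered verb pairs placed in the same cluster
    return {frozenset(p) for cluster in clustering
            for p in combinations(sorted(cluster), 2)}

def pairwise_scores(hypothesis, gold):
    # precision/recall over co-membership pairs; note that singleton
    # clusters contribute no pairs at all, one source of the
    # small-cluster bias discussed in the text
    hyp, ref = pairs(hypothesis), pairs(gold)
    precision = len(hyp & ref) / len(hyp) if hyp else 0.0
    recall = len(hyp & ref) / len(ref) if ref else 0.0
    return precision, recall
```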
But this bias is strongest at the extremes of the range, and does not appear to impact too heavily on our results.", "cite_spans": [ { "start": 213, "end": 234, "text": "(Strehl et al., 2000)", "ref_id": "BIBREF21" }, { "start": 512, "end": 533, "text": "(Vilain et al., 1995)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Clustering Evaluation", "sec_num": "5" }, { "text": "Unfortunately none of Strehl et al.'s measures have all the properties which we intuitively require from a measure of linguistic cluster quality. For example, if we restrict attention to the case in which all verbs in an inferred cluster are drawn from the same actual class, we would like it to be the case that the evaluation measure is a monotonically increasing function of the size of the inferred cluster. We therefore introduced an additional, more suitable measure for the evaluation of individual clusters, based on the representation of equivalence classes as sets of pairs. It turns out that pairwise precision and recall have some of the counter-intuitive properties that we objected to in Strehl et al.'s measures, so we adjust pairwise precision with a scaling factor based on the size of the cluster.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Clustering Evaluation", "sec_num": "5" }, { "text": "Figures 1 and 2 summarise the two evaluation measures for overall cluster quality, showing the variation with the KL-based distance measures and with different strategies for seeding the initial cluster centres in the k-Means algorithm. Figure 1 displays quality scores referring to the coarse condition 1 subcategorisation frame types, while Figure 2 refers to the clustering results obtained by verb descriptions based on the more delicate condition 2 subcategorisation frame types including PP information. Baseline values are 0.017 (APP) and 0.229 (MI), calculated as the average over the evaluation of 10 random clusterings. 
Optimum values, as calculated on the manual classification, are 0.291 (APP) and 0.493 (MI). The evaluation function is extremely non-linear, which leads to a severe loss of quality with the first few clustering mistakes, but does not penalise later mistakes to the same extent.", "cite_spans": [], "ref_spans": [ { "start": 237, "end": 245, "text": "Figure 1", "ref_id": null }, { "start": 337, "end": 345, "text": "Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Clustering Evaluation", "sec_num": "5" }, { "text": "From the methodological point of view, the clustering evaluation gave interesting insights into k-Means' behaviour on the syntactic frame data. The more delicate verb-frame classification, i.e. the refinement of the syntactic verb frame descriptions by prepositional phrase specification, improved the clustering results. This does not go without saying: there was potential for a sparse data problem, since even frequent verbs can only be expected to inhabit a few frames. For example, the verb anfangen with a corpus frequency of 2,554 has zero counts for 138 of the 171 frames. Whether the improvement really matters in an application task is left to further research.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Clustering Evaluation", "sec_num": "5" }, { "text": "We found that randomised starting clusters usually give better results than initialisation from a hierarchical clustering. Hierarchies imposing a strong structure on the clustering (such as single-linkage: the output clusterings contain few very large and many singleton clusters) are hardly improved by k-Means. Their evaluation results are noticeably below those for random clusters. But initialisation using Ward's method, which produces tighter clusters and a narrower range of cluster sizes does outperform random cluster initialisation. 
Presumably the issue is that the other hierarchical clustering methods place k-Means in a local minimum from which it cannot escape, and that uniformly shaped cluster initialisation gives k-Means a better chance of avoiding local minima, even with a high degree of perturbation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Clustering Evaluation", "sec_num": "5" }, { "text": "The clustering setup, procedure and results provide a basis for a linguistic investigation concerning the German verbs, their syntactic properties and semantic classification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linguistic Investigation", "sec_num": "6" }, { "text": "The following clustering result is an intuitively plausible semantic verb classification, accompanied by the cluster quality scores", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linguistic Investigation", "sec_num": "6" }, { "text": ", and class labels illustrating the majority vote of the verbs in the cluster. 3 The cluster analysis was obtained by running k-Means on a random cluster initialisation, with information radius as distance measure; the verb description contained condition 2 subcategorisation frame types with PP information. 
a) ahnen, vermuten, wissen (0.75) Propositional Attitude b) denken, glauben (0.33) Propositional Attitude c) anfangen, aufh\u00f6ren, beginnen, beharren, enden, insistieren, rudern (0.88) Aspect d) liegen, sitzen, stehen (0.75) Position e) dienen, folgen, helfen (0.75) Support f) nieseln, regnen, schneien (0.75) Weather g) d\u00e4mmern (0.00) Weather h) blitzen, donnern, segeln (0.25) Weather i) bestehen, fahren, fliegen, pochen (0.4) Insistence or Manner of Motion j) freuen, \u00e4rgern (0.33) Emotion k) essen, konsumieren, saufen, trinken, verk\u00fcnden (1.00) Consumption l) bringen, er\u00f6ffnen, lesen, liefern, schicken, schlie\u00dfen, vermitteln, \u00f6ffnen (0.78) Supply We compared the clustering to the gold standard and examined the underlying verb frame distributions. We undertook a series of post-hoc cluster analyses to explore the influence of specific frames and frame groups on the formation of verb classes, such as: what is the difference in the clustering result (on the same starting clusters) if we delete all frame types containing an expletive es (frame types including x)? Space limitations allow us only a few insights.", "cite_spans": [ { "start": 79, "end": 80, "text": "3", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Linguistic Investigation", "sec_num": "6" }, { "text": "Clusters (a) and (b) are pure sub-classes of the semantic verb class Propositional Attitude. The verbs agree in their syntactic subcategorisation of a direct object (na) and finite clauses (ns-2, ns-dass); denken and glauben are assigned to a different cluster, because they also appear as intransitives, subcategorise the prepositional phrase Akk.an, and show especially strong probabilities for ns-2. 
Deleting na or frames containing s from the verb description destroys the coherent clusters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linguistic Investigation", "sec_num": "6" }, { "text": "Cluster (c) contains two sub-classes from Aspect and Insistence, polluted by the verb rudern 'to row'. All Aspect verbs show a 50% preference for an intransitive usage, and a minor 20% preference for the subcategorisation of non-finite clauses. By mistake, the infrequent verb rudern (corpus frequency 49) shows a similar preference for ni in its frame distribution and therefore appears within the same cluster as the Aspect verbs. The frame confusion has been caused by parsing mistakes for the infrequent verb; ni is not among the frames possibly subcategorised by rudern. Even though the verbs beharren and insistieren have characteristic frames np:Dat.auf and ns-2, they share an affinity for n with the aspect verbs. When eliminating n from the feature description of the verbs, the cluster is reduced to those verbs using ni.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linguistic Investigation", "sec_num": "6" }, { "text": "Cluster (d) is correct: Position. The syntactic usage of the three verbs is rather individual with strong probabilities for n, np:Dat.auf and np:Dat.in. Even the elimination of any of the three frame features does not cause a separation of the verbs in the clustering.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linguistic Investigation", "sec_num": "6" }, { "text": "Cluster (j) represents the semantic class Emotion which, in German, has a highly characteristic signature in its strong association with reflexive frames; the cluster evaporates if we remove the distinctions made in the r feature group. zustellen in cluster (n) represents a singleton because of its extraordinarily strong preference ( 50%) for the ditransitive usage. 
Eliminating the frame from the verb description assigns zustellen to the same cluster as the other verbs of Transfer of Possession (Supply).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linguistic Investigation", "sec_num": "6" }, { "text": "Recall that we used two different sets of syntactic frames, the second of which makes more delicate distinctions in the area of prepositional phrases. As pointed out in Section 5, refining the syntactic verb information by PPs was helpful for the semantic clustering. But, contrary to our original intuitions, the detailed prepositional phrase information is less useful in the clustering of verbs with obligatory PP arguments than in the clustering of verbs where the PPs are optional. We performed a first test on the role of PP information: eliminating all PP information from the verb descriptions (not only the delicate PP information in condition 2, but also PP argument information in the coarse condition 1 frames) produced obvious deficiencies in most of the semantic classes, among them Weather and Support, whose verbs do not require PPs as arguments. A second test confirmed the finding: we augmented our coarse-grained verb frame repertoire with a much reduced set of PPs, those commonly assumed as argument PPs. This provides some but not all of the PP information in condition 2. The clustering result is deficient mainly in its classification of the verbs of Propositional Attitude, Support, Opening, and few of these subcategorise for PPs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linguistic Investigation", "sec_num": "6" }, { "text": "Clusters such as (k) to (l) suggest directions in which it might be desirable to subdivide the verb frames, for example by adding a limited amount of information about selectional preferences. 
Previous work has shown that sparse data issues preclude across-the-board incorporation of selectional information (Schulte im Walde, 2000), but a rough distinction such as physical object vs. abstraction on the direct object slot could, for example, help to split verk\u00fcnden from the other verbs in cluster (k).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linguistic Investigation", "sec_num": "6" }, { "text": "The linguistic investigation gives some insight into the reasons for the success of our (rather simple) clustering technique. We successfully exploited the connection between the syntactic behaviour of a verb and its meaning components. The clustering result shows a good match to the manually defined semantic verb classes, and in many cases it is clear which frames are influential in the creation of which clusters, and how. We showed that we acquired implicit components of meaning through a syntactic extraction from a corpus, since the semantic verb classes are strongly related to the patterns in the syntactic descriptors. Everything in this study suggests that the move to larger datasets is an appropriate next step.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linguistic Investigation", "sec_num": "6" }, { "text": "The paper presented the application of k-Means to the task of inducing semantic classes for German verbs. Based on purely syntactic probability distributions over verb subcategorisation frames, we obtained an intuitively plausible clustering of 57 verbs into 14 classes. The automatic clustering was evaluated against hand-constructed semantic verb classes. 
A series of post-hoc cluster analyses explored the influence of specific frames and frame groups on the coherence of the verb classes, and supported the tight connection between the syntactic behaviour of the verbs and their meaning components.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "Future work will concern the extension of the clustering experiments to a larger number of verbs, both for the scientific purpose of refining our understanding of the semantic and syntactic status of verb classes and for the more applied goal of creating a large, reliable and high quality lexical resource for German. For this task, we will need to further refine our verb classes, further develop the repertoire of syntactic frames which we use, perhaps improve the statistical grammar from which the frames were extracted and find techniques which allow us to selectively include such information about selectional preferences as is warranted by the availability of training data and the capabilities of clustering technology.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "We also tried various transformations and variations of the probabilities, such as frequencies and binarisation, but none proved as effective as the probabilities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In the absence of the penalty, mutual information would attain its maximum (which is the entropy of", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Verbs that are part of the majority are shown in bold face, others in plain text. 
Where there is no clear majority, both class labels are given.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Role of Word Sense Disambiguation in Lexical Acquisition: Predicting Semantics from Syntactic Cues", "authors": [ { "first": "Bonnie", "middle": [ "J" ], "last": "Dorr", "suffix": "" }, { "first": "Doug", "middle": [], "last": "Jones", "suffix": "" } ], "year": 1996, "venue": "Proceedings of the 16th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bonnie J. Dorr and Doug Jones. 1996. Role of Word Sense Disambiguation in Lexical Acquisition: Predict- ing Semantics from Syntactic Cues. In Proceedings of the 16th International Conference on Computational Linguistics, Copenhagen, Denmark.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Large-Scale Dictionary Construction for Foreign Language Tutoring and Interlingual Machine Translation", "authors": [ { "first": "Bonnie", "middle": [], "last": "Dorr", "suffix": "" } ], "year": 1997, "venue": "Machine Translation", "volume": "12", "issue": "4", "pages": "271--322", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bonnie Dorr. 1997. Large-Scale Dictionary Con- struction for Foreign Language Tutoring and Inter- lingual Machine Translation. Machine Translation, 12(4):271-322.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "DUDEN -Das Stilw\u00f6rterbuch. Number 2 in 'Duden in zw\u00f6lf B\u00e4nden", "authors": [], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dudenredaktion, editor. 2001. DUDEN -Das Stil- w\u00f6rterbuch. Number 2 in 'Duden in zw\u00f6lf B\u00e4nden'. 
Dudenverlag, Mannheim, 8th edition.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Cluster Analysis of Multivariate Data: Efficiency vs", "authors": [ { "first": "E", "middle": [ "W" ], "last": "Forgy", "suffix": "" } ], "year": 1965, "venue": "Interpretability of Classifications. Biometrics", "volume": "21", "issue": "", "pages": "768--780", "other_ids": {}, "num": null, "urls": [], "raw_text": "E.W. Forgy. 1965. Cluster Analysis of Multivariate Data: Efficiency vs. Interpretability of Classifications. Biometrics, 21:768-780.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Deutsche Grammatik. Langenscheidt -Verlag Enzyklop\u00e4die", "authors": [ { "first": "Gerhard", "middle": [], "last": "Helbig", "suffix": "" }, { "first": "Joachim", "middle": [], "last": "Buscha", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gerhard Helbig and Joachim Buscha. 1998. Deutsche Grammatik. Langenscheidt -Verlag Enzyklop\u00e4die, 18th edition.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Finding Groups in Data -An Introduction to Cluster Analysis. Probability and Mathematical Statistics", "authors": [ { "first": "Leonard", "middle": [], "last": "Kaufman", "suffix": "" }, { "first": "Peter", "middle": [ "J" ], "last": "Rousseeuw", "suffix": "" } ], "year": 1990, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Leonard Kaufman and Peter J. Rousseeuw. 1990. Finding Groups in Data -An Introduction to Cluster Analysis. Probability and Mathematical Statistics.
John Wiley and Sons, Inc.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "The Role of Verbs in Document Analysis", "authors": [ { "first": "Judith", "middle": [ "L" ], "last": "Klavans", "suffix": "" }, { "first": "Min-Yen", "middle": [], "last": "Kan", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the 17th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Judith L. Klavans and Min-Yen Kan. 1998. The Role of Verbs in Document Analysis. In Proceedings of the 17th International Conference on Computational Linguistics, Montreal, Canada.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Acquiring Lexical Generalizations from Corpora: A Case Study for Diathesis Alternations", "authors": [ { "first": "Maria", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "397--404", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maria Lapata. 1999. Acquiring Lexical Generalizations from Corpora: A Case Study for Diathesis Alternations. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics, pages 397-404.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "On the Effectiveness of the Skew Divergence for Statistical Language Analysis", "authors": [ { "first": "Lillian", "middle": [ "Lee" ], "last": "", "suffix": "" } ], "year": 2001, "venue": "Artificial Intelligence and Statistics", "volume": "", "issue": "", "pages": "65--72", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lillian Lee. 2001. On the Effectiveness of the Skew Divergence for Statistical Language Analysis.
Artificial Intelligence and Statistics, pages 65-72.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "English Verb Classes and Alternations", "authors": [ { "first": "Beth", "middle": [], "last": "Levin", "suffix": "" } ], "year": 1993, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Beth Levin. 1993. English Verb Classes and Alternations. The University of Chicago Press, Chicago, 1st edition.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Lexical Acquisition at the Syntax-Semantics Interface: Diathesis Alternations, Subcategorization Frames and Selectional Preferences", "authors": [ { "first": "Diana", "middle": [], "last": "Mccarthy", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diana McCarthy. 2001. Lexical Acquisition at the Syntax-Semantics Interface: Diathesis Alternations, Subcategorization Frames and Selectional Preferences. Ph.D. thesis, University of Sussex.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Using a Probabilistic Class-Based Lexicon for Lexical Ambiguity Resolution", "authors": [ { "first": "Detlef", "middle": [], "last": "Prescher", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Riezler", "suffix": "" }, { "first": "Mats", "middle": [], "last": "Rooth", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the 18th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Detlef Prescher, Stefan Riezler, and Mats Rooth. 2000. Using a Probabilistic Class-Based Lexicon for Lexical Ambiguity Resolution.
In Proceedings of the 18th International Conference on Computational Linguistics, Saarbr\u00fccken.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Lexicalized Stochastic Modeling of Constraint-Based Grammars using Log-Linear Measures and EM Training", "authors": [ { "first": "Stefan", "middle": [], "last": "Riezler", "suffix": "" }, { "first": "Detlef", "middle": [], "last": "Prescher", "suffix": "" }, { "first": "Jonas", "middle": [], "last": "Kuhn", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the 38th", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stefan Riezler, Detlef Prescher, Jonas Kuhn, and Mark Johnson. 2000. Lexicalized Stochastic Modeling of Constraint-Based Grammars using Log-Linear Measures and EM Training. In Proceedings of the 38th", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Annual Meeting of the Association for Computational Linguistics", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Annual Meeting of the Association for Computational Linguistics, Hong Kong.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Lopar: Design and Implementation. Arbeitspapiere des Sonderforschungsbereichs 340 Linguistic Theory and the Foundations of Computational Linguistics 149", "authors": [ { "first": "Helmut", "middle": [], "last": "Schmid", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Helmut Schmid. 2000. Lopar: Design and Implementation.
Arbeitspapiere des Sonderforschungsbereichs 340 Linguistic Theory and the Foundations of Computational Linguistics 149, Institut f\u00fcr Maschinelle Sprachverarbeitung, Universit\u00e4t Stuttgart.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Statistical Grammar Models and Lexicon Acquisition", "authors": [ { "first": "Sabine", "middle": [], "last": "Schulte Im Walde", "suffix": "" }, { "first": "Helmut", "middle": [], "last": "Schmid", "suffix": "" }, { "first": "Mats", "middle": [], "last": "Rooth", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Riezler", "suffix": "" }, { "first": "Detlef", "middle": [], "last": "Prescher", "suffix": "" } ], "year": 2001, "venue": "Linguistic Form and its Computation. CSLI Publications", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sabine Schulte im Walde, Helmut Schmid, Mats Rooth, Stefan Riezler, and Detlef Prescher. 2001. Statistical Grammar Models and Lexicon Acquisition. In Christian Rohrer, Antje Rossdeutscher, and Hans Kamp, editors, Linguistic Form and its Computation. CSLI Publications, Stanford, CA.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Clustering Verbs Semantically According to their Alternation Behaviour", "authors": [ { "first": "Sabine", "middle": [], "last": "Schulte Im Walde", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the 18th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "747--753", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sabine Schulte im Walde. 2000. Clustering Verbs Semantically According to their Alternation Behaviour.
In Proceedings of the 18th International Conference on Computational Linguistics, pages 747-753, Saarbr\u00fccken, Germany.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Evaluating Verb Subcategorisation Frames learned by a German Statistical Grammar against Manual Definitions in the Duden Dictionary", "authors": [ { "first": "Sabine", "middle": [], "last": "Schulte Im Walde", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 10th EURALEX International Congress", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sabine Schulte im Walde. 2002a. Evaluating Verb Subcategorisation Frames learned by a German Statistical Grammar against Manual Definitions in the Duden Dictionary. In Proceedings of the 10th EURALEX International Congress, Copenhagen, Denmark. To appear.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "A Subcategorisation Lexicon for German Verbs induced from a Lexicalised PCFG", "authors": [ { "first": "Sabine", "middle": [], "last": "Schulte Im Walde", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 3rd Conference on Language Resources and Evaluation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sabine Schulte im Walde. 2002b. A Subcategorisation Lexicon for German Verbs induced from a Lexicalised PCFG. In Proceedings of the 3rd Conference on Language Resources and Evaluation, Las Palmas de Gran Canaria, Spain. To appear.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Verben in Feldern. de Gruyter", "authors": [ { "first": "Helmut", "middle": [], "last": "Schumacher", "suffix": "" } ], "year": 1986, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Helmut Schumacher. 1986. Verben in Feldern.
de Gruyter, Berlin.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Automatic Verb Classification Using Distributions of Grammatical Features", "authors": [ { "first": "Suzanne", "middle": [], "last": "Stevenson", "suffix": "" }, { "first": "Paola", "middle": [], "last": "Merlo", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the 9th Conference of the European Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "45--52", "other_ids": {}, "num": null, "urls": [], "raw_text": "Suzanne Stevenson and Paola Merlo. 1999. Automatic Verb Classification Using Distributions of Grammatical Features. In Proceedings of the 9th Conference of the European Chapter of the Association for Computational Linguistics, pages 45-52.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Impact of Similarity Measures on Web-page Clustering", "authors": [ { "first": "Alexander", "middle": [], "last": "Strehl", "suffix": "" }, { "first": "Joydeep", "middle": [], "last": "Ghosh", "suffix": "" }, { "first": "Raymond", "middle": [], "last": "Mooney", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the 17th National Conference on Artificial Intelligence (AAAI 2000): Workshop of Artificial Intelligence for Web Search", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexander Strehl, Joydeep Ghosh, and Raymond Mooney. 2000. Impact of Similarity Measures on Web-page Clustering.
In Proceedings of the 17th National Conference on Artificial Intelligence (AAAI 2000): Workshop of Artificial Intelligence for Web Search, Austin, Texas.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "A Model-Theoretic Coreference Scoring Scheme", "authors": [ { "first": "Marc", "middle": [], "last": "Vilain", "suffix": "" }, { "first": "John", "middle": [], "last": "Burger", "suffix": "" }, { "first": "John", "middle": [], "last": "Aberdeen", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the 6th Message Understanding Conference", "volume": "", "issue": "", "pages": "45--52", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marc Vilain, John Burger, John Aberdeen, Dennis Connolly, and Lynette Hirschman. 1995. A Model-Theoretic Coreference Scoring Scheme. In Proceedings of the 6th Message Understanding Conference, pages 45-52, San Francisco.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Clasificaci\u00f3n Verbal: Alternancias de Di\u00e1tesis. Number 3 in Quaderns de Sintagma", "authors": [ { "first": "Gloria", "middle": [], "last": "V\u00e1zquez", "suffix": "" }, { "first": "Ana", "middle": [], "last": "Fern\u00e1ndez", "suffix": "" }, { "first": "Irene", "middle": [], "last": "Castell\u00f3n", "suffix": "" }, { "first": "M", "middle": [ "Antonia" ], "last": "Mart\u00ed", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gloria V\u00e1zquez, Ana Fern\u00e1ndez, Irene Castell\u00f3n, and M. Antonia Mart\u00ed. 2000. Clasificaci\u00f3n Verbal: Alternancias de Di\u00e1tesis. Number 3 in Quaderns de Sintagma.
Universitat de Lleida.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "uris": null, "type_str": "figure", "text": "For measuring the quality of an individual cluster, the cluster purity of each cluster biased towards small clusters, with the extreme case of singleton clusters, which is an undesired property for our (linguistic) needs. To capture the quality of a whole clustering, Strehl et al. combine the mutual information between \"" }, "FIGREF2": { "num": null, "uris": null, "type_str": "figure", "text": "Cluster quality variation based on condition 2 verb descriptions m) ank\u00fcndigen, beenden, bekanntgeben, bekommen, beschreiben, charakterisieren, darstellen, erhalten, erlangen, interpretieren, kriegen, unterst\u00fctzen (1.00) Description and Obtaining n) zustellen (0.00) Supply" }, "TABREF1": { "html": null, "type_str": "table", "text": "Probability distribution for glauben", "num": null, "content": "" }, "TABREF2": { "html": null, "type_str": "table", "text": "", "num": null, "content": "
" }, "TABREF3": { "html": null, "type_str": "table", "text": "Refined np distribution for reden", "num": null, "content": "" } } } }