{
"paper_id": "N16-1039",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:37:25.849565Z"
},
"title": "Distinguishing Literal and Non-Literal Usage of German Particle Verbs",
"authors": [
{
"first": "Maximilian",
"middle": [],
"last": "K\u00f6per",
"suffix": "",
"affiliation": {},
"email": "maximilian.koeper@ims.uni-stuttgart.de"
},
{
"first": "Sabine",
"middle": [],
"last": "Schulte",
"suffix": "",
"affiliation": {},
"email": "schulte@ims.uni-stuttgart.de"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper provides a binary, token-based classification of German particle verbs (PVs) into literal vs. non-literal usage. A random forest improving standard features (e.g., bagof-words; affective ratings) with PV-specific information and abstraction over common nouns significantly outperforms the majority baseline. In addition, PV-specific classification experiments demonstrate the role of shared particle semantics and semantically related base verbs in PV meaning shifts.",
"pdf_parse": {
"paper_id": "N16-1039",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper provides a binary, token-based classification of German particle verbs (PVs) into literal vs. non-literal usage. A random forest improving standard features (e.g., bagof-words; affective ratings) with PV-specific information and abstraction over common nouns significantly outperforms the majority baseline. In addition, PV-specific classification experiments demonstrate the role of shared particle semantics and semantically related base verbs in PV meaning shifts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Automatic detection of non-literal expressions (including metaphors and idioms) is critical for many natural language processing (NLP) tasks such as information extraction, machine translation, and sentiment analysis. For this reason, the last decade has seen an increase in research on identifying literal vs. non-literal meaning (Birke and Sarkar, 2006; Birke and Sarkar, 2007; Li and Sporleder, 2009; Turney et al., 2011; Shutova et al., 2013; Tsvetkov et al., 2014) , as well as the establishment of workshops on metaphorical language in NLP. 1 In this paper, we explore the prediction of literal vs. non-literal language usage of a computationally challenging class of multiword expressions: German particle verbs (PVs) such as anlachen (laugh at) are compositions of a base verb 1 sites.google.com/site/metaphorinnlp2016/ (BV) such as lachen (smile/laugh) and a verb particle such as an. German PVs are highly productive (Springorum et al., 2013b; Springorum et al., 2013a) , and the particles are notoriously ambiguous (Lechler and Ro\u00dfdeutscher, 2009; Haselbach, 2011; Springorum, 2011) . Furthermore, the particles often trigger (regular) meaning shifts when they combine with base verbs (Springorum et al., 2013b) , so the resulting PVs represent frequent cases of non-literal meaning. The contributions of this paper are as follows:",
"cite_spans": [
{
"start": 331,
"end": 355,
"text": "(Birke and Sarkar, 2006;",
"ref_id": "BIBREF4"
},
{
"start": 356,
"end": 379,
"text": "Birke and Sarkar, 2007;",
"ref_id": "BIBREF5"
},
{
"start": 380,
"end": 403,
"text": "Li and Sporleder, 2009;",
"ref_id": "BIBREF19"
},
{
"start": 404,
"end": 424,
"text": "Turney et al., 2011;",
"ref_id": "BIBREF35"
},
{
"start": 425,
"end": 446,
"text": "Shutova et al., 2013;",
"ref_id": "BIBREF26"
},
{
"start": 447,
"end": 469,
"text": "Tsvetkov et al., 2014)",
"ref_id": "BIBREF33"
},
{
"start": 547,
"end": 548,
"text": "1",
"ref_id": null
},
{
"start": 927,
"end": 953,
"text": "(Springorum et al., 2013b;",
"ref_id": "BIBREF30"
},
{
"start": 954,
"end": 979,
"text": "Springorum et al., 2013a)",
"ref_id": "BIBREF29"
},
{
"start": 1026,
"end": 1058,
"text": "(Lechler and Ro\u00dfdeutscher, 2009;",
"ref_id": "BIBREF17"
},
{
"start": 1059,
"end": 1075,
"text": "Haselbach, 2011;",
"ref_id": "BIBREF13"
},
{
"start": 1076,
"end": 1093,
"text": "Springorum, 2011)",
"ref_id": "BIBREF31"
},
{
"start": 1196,
"end": 1222,
"text": "(Springorum et al., 2013b)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1. We present a random forest classifier that correctly identifies 86.8% of literal vs. non-literal language usage within a novel dataset of 6 436 annotated sentences, in comparison to a majority baseline of 64.9%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2. We successfully incorporate salient PVspecific features and noun clusters in addition to standard bag-of-words features and affective ratings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "3. We demonstrate that PVs with semantically similar particles and semantically similar base verbs can predict each others' literal vs. non-literal language usage.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "4. We illustrate the potential and the limits of the most salient classification features in predicting PV non-literal language usage.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the remainder of this paper we describe previous work on non-literal language identification and computational models of German particle verbs (Section 2), before we introduce our dataset on German particle verbs (Section 3), the particle verb features (Section 4), and the experiments, results and analyses (Section 5).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Previous work relevant to this paper includes research on identifying non-literal language usage, and computational work on (German) particle verb meaning. Birke and Sarkar (2006) , Birke and Sarkar (2007) , Li and Sporleder (2009) and performed binary token-based classifications for English datasets, relying on various contextual indicators. Birke & Sarkar exploited seed sets of literal vs. non-literal sentences, and used distributional similarity to classify English verbs. Li & Sporleder defined two models of text cohesion (a cohesion chain and a cohesion graph) to classify V+NP and V+PP combinations. Shutova et al. (2013) performed both metaphor identification and interpretion (by paraphrasing), focusing on English verbs. She relied on a seed set of annotated metaphors and standard verb and noun clustering, to classify literal vs. metaphorical verb senses. Gedigian et al. (2006) also predicted metaphorical meanings of English verb tokens, however heavily relying on manual rather than unsupervised data (i.e. labeled sentences and PropBank annotation) and a maximum entropy classifier. Turney et al. (2011) assumed that metaphorical word usage is correlated with the degree of abstractness of the word's context, and classified word senses in a given context as either literal or metaphorical. Their targets were adjective-noun combinations and verbs. Tsvetkov et al. (2014) presented a language-independent approach to metaphor identification. They used affective ratings, Word-Net categories and vector-space word representations to train a metaphor-detecting classifier on English samples, and then applied it to a different target language using bilingual dictionaries.",
"cite_spans": [
{
"start": 156,
"end": 179,
"text": "Birke and Sarkar (2006)",
"ref_id": "BIBREF4"
},
{
"start": 182,
"end": 205,
"text": "Birke and Sarkar (2007)",
"ref_id": "BIBREF5"
},
{
"start": 208,
"end": 231,
"text": "Li and Sporleder (2009)",
"ref_id": "BIBREF19"
},
{
"start": 611,
"end": 632,
"text": "Shutova et al. (2013)",
"ref_id": "BIBREF26"
},
{
"start": 872,
"end": 894,
"text": "Gedigian et al. (2006)",
"ref_id": "BIBREF12"
},
{
"start": 1103,
"end": 1123,
"text": "Turney et al. (2011)",
"ref_id": "BIBREF35"
},
{
"start": 1369,
"end": 1391,
"text": "Tsvetkov et al. (2014)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Computational research on particle verbs was initially concerned with the automatic acquisition of particle verbs from corpora (Baldwin and Villavicencio, 2002; Baldwin, 2005; Villavicencio, 2005) .",
"cite_spans": [
{
"start": 127,
"end": 160,
"text": "(Baldwin and Villavicencio, 2002;",
"ref_id": "BIBREF0"
},
{
"start": 161,
"end": 175,
"text": "Baldwin, 2005;",
"ref_id": "BIBREF2"
},
{
"start": 176,
"end": 196,
"text": "Villavicencio, 2005)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Identification of non-literal language usage:",
"sec_num": null
},
{
"text": "Afterwards, the main focus has been on modelling the degree of compositionality of particle verbs as based on distributional features (McCarthy et al., 2003; Baldwin et al., 2003; Bannard, 2005) . All these approaches were type-based, and predicting the compositionality was mainly concerned with PV-BV similarity, not taking the contribution of the particle into account. In cases where the particle semantics was respected (such as Bannard (2005) ), the results were disappointing because modelling particle senses is still an unsolved problem.",
"cite_spans": [
{
"start": 134,
"end": 157,
"text": "(McCarthy et al., 2003;",
"ref_id": "BIBREF21"
},
{
"start": 158,
"end": 179,
"text": "Baldwin et al., 2003;",
"ref_id": "BIBREF1"
},
{
"start": 180,
"end": 194,
"text": "Bannard, 2005)",
"ref_id": "BIBREF3"
},
{
"start": 434,
"end": 448,
"text": "Bannard (2005)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Identification of non-literal language usage:",
"sec_num": null
},
{
"text": "Regarding German particle verbs, there has also been a focus on modelling PV compositionality (K\u00fchner and Schulte im Walde, 2010; Bott and Schulte im Walde, 2014; Bott and Schulte im Walde, 2015). As in English, the approaches were all type-based and mainly concerned with PV-BV similarity. Another line of research categorized particle meanings by relating formal semantic definitions to automatic classifications (R\u00fcd, 2012; Springorum et al., 2012) . Furthermore, Springorum et al. (2013b) recently provided a corpusbased study on regular meaning shift conditions for German particle verbs.",
"cite_spans": [
{
"start": 415,
"end": 426,
"text": "(R\u00fcd, 2012;",
"ref_id": "BIBREF23"
},
{
"start": 427,
"end": 451,
"text": "Springorum et al., 2012)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Identification of non-literal language usage:",
"sec_num": null
},
{
"text": "We selected 165 particle verbs across 10 particles, based on previous experiments and datasets that incorporated German particle verbs with regular meaning shifts, various degrees of ambiguity, and across frequency ranges (Springorum et al., 2013b; Springorum et al., 2013a; Bott and Schulte im Walde, 2015) . For the 165 PVs, we randomly extracted 50 sentences from DECOW14AX, a German web corpus containing 12 billion tokens (Sch\u00e4fer and Bildhauer, 2012; Sch\u00e4fer, 2015) . The sentences were morphologically annotated and parsed using SMOR (Faa\u00df et al., 2010) , Mar-MoT (M\u00fcller et al., 2013) and the MATE dependency parser (Bohnet, 2010) . Combining partof-speech and dependency information, we were able to reliably sample both separated and nonseparated PV occurrences (\"Der Ast bricht ab\" vs. \"Der Ast ist abgebrochen\").",
"cite_spans": [
{
"start": 222,
"end": 248,
"text": "(Springorum et al., 2013b;",
"ref_id": "BIBREF30"
},
{
"start": 249,
"end": 274,
"text": "Springorum et al., 2013a;",
"ref_id": "BIBREF29"
},
{
"start": 275,
"end": 307,
"text": "Bott and Schulte im Walde, 2015)",
"ref_id": null
},
{
"start": 427,
"end": 456,
"text": "(Sch\u00e4fer and Bildhauer, 2012;",
"ref_id": "BIBREF24"
},
{
"start": 457,
"end": 471,
"text": "Sch\u00e4fer, 2015)",
"ref_id": "BIBREF25"
},
{
"start": 541,
"end": 560,
"text": "(Faa\u00df et al., 2010)",
"ref_id": "BIBREF11"
},
{
"start": 571,
"end": 592,
"text": "(M\u00fcller et al., 2013)",
"ref_id": "BIBREF22"
},
{
"start": 624,
"end": 638,
"text": "(Bohnet, 2010)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Particle Verb Dataset",
"sec_num": "3"
},
{
"text": "Three German native speakers with a linguistic background annotated each of the 8 128 sen-tences 2 on a 6-point scale [0, 5] , ranging from clearly literal (0) to clearly non-literal (5) usage. The total agreement of the annotators on all six categories was 43%, Fleiss' \u03ba = 0.35. Dividing the scale into two disjunctive ranges with three categories each ([0, 2] and [3, 5] ), the total agreement of the annotators on the two categories was 79%, Fleiss' \u03ba = 0.70. In the experiments we used the binary-class distinction, and disregarded all cases of disagreement. This final dataset comprises 6 436 sentences: 4 174 literal and 2 262 nonliteral uses across 159 particle verbs and 10 particles. 3 Figure 1 shows the distribution of literal and non-literal sentences across the particles. Literal Non-Literal Figure 1 : Lit/Non-lit distribution across particles.",
"cite_spans": [
{
"start": 118,
"end": 121,
"text": "[0,",
"ref_id": null
},
{
"start": 122,
"end": 124,
"text": "5]",
"ref_id": null
},
{
"start": 355,
"end": 373,
"text": "([0, 2] and [3, 5]",
"ref_id": null
},
{
"start": 694,
"end": 695,
"text": "3",
"ref_id": null
}
],
"ref_spans": [
{
"start": 696,
"end": 704,
"text": "Figure 1",
"ref_id": null
},
{
"start": 807,
"end": 815,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Particle Verb Dataset",
"sec_num": "3"
},
{
"text": "Our feature space includes standard features to detect non-literal language uses (bags-of-words and affective ratings) as well as PV-specific features and abstraction over common nouns.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Particle Verb Features",
"sec_num": "4"
},
{
"text": "As a standard feature in vector space models, we used all words in the particle verb sentences, i.e., a bag-of-words model relying on unigrams. We expected this standard information to be useful, because some words such as the abstract noun Hoffnung (hope) and the concrete noun Geld (money) frequently occur with non-literal rather than literal language usage:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unigrams",
"sec_num": "4.1"
},
{
"text": "1. (non-lit.) \"Die Hoffnung keimte fr\u00fch auf.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unigrams",
"sec_num": "4.1"
},
{
"text": "That hope arose (lit: sprouted) early.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unigrams",
"sec_num": "4.1"
},
{
"text": "2. (non-lit.) \"Er versucht das Geld abzugraben.\" He tries to demand (lit: dig off) the money.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unigrams",
"sec_num": "4.1"
},
{
"text": "To overcome data sparseness, we did not use the unigrams as individual features (|V | = feature space), but implemented this feature as the output of a text-classifier. We relied on the Multinomial Naive Bayes (MNB) classifier by McCallum and Nigam (1998) . While the classifier was designed for document classification, we considered a sentence as a document and the possible class outcomes were literal and non-literal.",
"cite_spans": [
{
"start": 230,
"end": 255,
"text": "McCallum and Nigam (1998)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unigrams",
"sec_num": "4.1"
},
{
"text": "Noun Clusters Because of the severe data sparseness in our PV feature sets, we performed noun generalization and applied the generalized information to all nouns in our PV contexts. Using all approx. 430 000 nouns that appeared >100 times in the DECOW14AX corpus, we applied k-Means clustering with k \u2208 [2, 10 000]. As an alternative to the standard unigrams, we then replaced every noun in the PV sentences with its corresponding cluster tag.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unigrams",
"sec_num": "4.1"
},
{
"text": "Previous work on detecting non-literal language often makes use of psycholinguistic attributes, namely abstractness and concreteness ratings (Turney et al., 2011) , and imageability ratings (Tsvetkov et al., 2014) . Words with high abstractness ratings refer to entities that cannot be perceived with our senses; a large subset of which are non-visual (i.e., receive low imageability). It has been shown that non-literal expressions tend to occur with abstract words (dark humor versus dark hair). We thus expected affective ratings to be useful for particle verbs as well:",
"cite_spans": [
{
"start": 141,
"end": 162,
"text": "(Turney et al., 2011)",
"ref_id": "BIBREF35"
},
{
"start": 190,
"end": 213,
"text": "(Tsvetkov et al., 2014)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Affective Ratings",
"sec_num": "4.2"
},
{
"text": "1. (lit.) \"Den Lippenstift kannst du dir abschminken.\" You can remove the lipstick.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Affective Ratings",
"sec_num": "4.2"
},
{
"text": "2. (non-lit.) \"'Den Job kannst du dir abschminken.\" You can forget about the job.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Affective Ratings",
"sec_num": "4.2"
},
{
"text": "We reimplemented the algorithm from (Turney and Littman, 2003) to create large-scale abstractness and imageability ratings for German (K\u00f6per and Schulte im Walde, 2016) . Based on these ratings, we defined the following (partially redundant) features for the PV sentential contexts: ",
"cite_spans": [
{
"start": 36,
"end": 62,
"text": "(Turney and Littman, 2003)",
"ref_id": "BIBREF34"
},
{
"start": 134,
"end": 168,
"text": "(K\u00f6per and Schulte im Walde, 2016)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Affective Ratings",
"sec_num": "4.2"
},
{
"text": "Particle verbs with a meaning shift are noncompositional regarding their base verbs. We thus implemented a PV-specific feature that measures the distributional fit of PVs and their BVs in the PV contexts. For example, looking at the following two PV sentences containing the BV klingen (to sound), the context of the first, literal sentence fits well to the BV meaning, but the context of the second, non-literal sentence does not. The distributional fit of the BV in the literal context should therefore be high, but the distributional fit of the BV in the non-literal context should be low.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distributional Fit of PV, BV and Context",
"sec_num": "4.3"
},
{
"text": "1. (lit.) \"Der Ton der Gitarre klingt aus.\" The tone of the guitar fades.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distributional Fit of PV, BV and Context",
"sec_num": "4.3"
},
{
"text": "2. (non-lit.) \"Den Abend lassen wir mit Wein ausklingen.\" We end the evening with wine.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distributional Fit of PV, BV and Context",
"sec_num": "4.3"
},
{
"text": "To measure the distributional fit of PVs and BVs to PV contexts, we created 400-dimensional word representations using the hyperwords toolkit (Levy et al., 2015) and the DECOW14AX corpus. We relied on a symmetrical window of size 3 and applied positive pointwise mutual (PPMI) feature weighting together with singular value decomposition (SVD). Based on the word representations, we calculated cosine similarities between the PVs and their contexts, and likewise between the respective BVs and the PV contexts. The contexts we used were the same seven dimensions we used for the affective ratings (cf. Section 4.2). For example, regarding the sentence \"Die Katze springt auf den Tisch\" (The cat jumps on the table), we calculated the distributional similarity between the PV \"aufspringen\" and the subject \"Katze\", and the distributional similarity between the BV \"springen\" and the subject \"Katze\", etc. Each PV-context and each BV-context dimension represents an individual feature.",
"cite_spans": [
{
"start": 142,
"end": 161,
"text": "(Levy et al., 2015)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Distributional Fit of PV, BV and Context",
"sec_num": "4.3"
},
{
"text": "In this section, we present a series of binary classification experiments to distinguish literal and non-literal PV usage. Section 5.1 presents the main experiments comparing our features in a global classification setup, and Section 5.2 presents PVspecific additional experiments that zoom into the role of particle types and into the role of semantically related PVs and BVs. Section 5.3 provides a qualitative analysis of the features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification Experiments",
"sec_num": "5"
},
{
"text": "We used a random forest with multiple (in our case 100) random decision trees, 4 with each tree voting for an overall classification result. The unigram information was represented by stacking the output of a multinomial naive bayes text classifier as a single feature into the random forest. For all machine learning algorithms we relied on the WEKA toolkit (Witten et al., 2011) .",
"cite_spans": [
{
"start": 359,
"end": 380,
"text": "(Witten et al., 2011)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Main Experiments",
"sec_num": "5.1"
},
{
"text": "The experiments were performed in two modes, (a) without knowledge of the particle (i.e., the individual particle was not provided as a feature), and (b) with explicit knowledge of the particle. In this way, we could identify the contribution of the particle.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Main Experiments",
"sec_num": "5.1"
},
{
"text": "The classification results are shown in Table 1 . We report on the feature type, and on the size 5 of the feature set f . We further present literal and non-literal f-scores F 1 , and accuracy with and without particle knowledge. We compare against the majority baseline (literal). The right-most columns indicate whether the differences in performance are statistically significant, using the \u03c7 2 The results demonstrate that the classification results across all feature types are significantly better than the majority baseline. The single best performing feature type (cf. lines 1-6) is the unigram information; in combination with the particle information (+P ), the distributional PV/BVcontext fit is best. Combining the best feature types (2+4+6) once more improves the results, and ditto when adding noun cluster information. 6 We can also see that abstractness (AC) ratings outperform imageability (IMG) ratings.",
"cite_spans": [
{
"start": 834,
"end": 835,
"text": "6",
"ref_id": null
}
],
"ref_spans": [
{
"start": 40,
"end": 47,
"text": "Table 1",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Main Experiments",
"sec_num": "5.1"
},
{
"text": "| f | Lit. F 1 Non-Lit. F 1 Acc. Acc. + P 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Type",
"sec_num": null
},
{
"text": "So overall, the best performing feature set successfully combines unigrams that incorporate clusters for noun generalization; abstractness ratings; and PV-specific information regarding the distributional PV/BV-context fit and knowledge about the particle. This setup correctly classifies literal sentences with an f-score of 88.8 and nonliteral sentences with an f-score of 77.3; overall accuracy is 86.8 over a baseline of 64.9.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Type",
"sec_num": null
},
{
"text": "It is difficult to compare our results against previous approaches on different datasets and in different languages. Regarding the closest approaches to our work, Tsvetkov et al. (2014) report an accuracy score of 82.0 using 10-fold cross-validation on a training dataset with a majority baseline of 59.2, combining multiple lexical semantic features on a dataset of 1 609 English subject-verb-object triples. Birke and Sarkar (2007) trained a single classifier for each of twentyfive verbs in the English TROFI verb dataset and reported only an average f-score: 64.9 against a 6 The best cluster analysis in our experiments contained 750 noun clusters. majority baseline of 62.9. Turney et al. (2011) obtained an average f-score of 63.9 and additionally report an accuracy score of 73.4 on the same dataset, using abstractness ratings.",
"cite_spans": [
{
"start": 163,
"end": 185,
"text": "Tsvetkov et al. (2014)",
"ref_id": "BIBREF33"
},
{
"start": 410,
"end": 433,
"text": "Birke and Sarkar (2007)",
"ref_id": "BIBREF5"
},
{
"start": 578,
"end": 579,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Type",
"sec_num": null
},
{
"text": "In contrast to our work, the two approaches by Birke and Sarkar (2007) and Turney et al. (2011) treated each group of sentences for a given target verb as a separate learning problem, while we learn one classifier across different verbs. Our method 4 (AC Ratings) can be considered a German re-implementation of the approach by Turney et al. (2011) . In comparison to the results of previous work, our approach can safely be considered state-of-the-art.",
"cite_spans": [
{
"start": 47,
"end": 70,
"text": "Birke and Sarkar (2007)",
"ref_id": "BIBREF5"
},
{
"start": 75,
"end": 95,
"text": "Turney et al. (2011)",
"ref_id": "BIBREF35"
},
{
"start": 328,
"end": 348,
"text": "Turney et al. (2011)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Type",
"sec_num": null
},
{
"text": "One traditional line of research to identify typebased multiword collocations or idiomatic expressions relies on the association strength between the multiword parts Stevenson et al., 2004) : The stronger the association between the parts of a multiword expression (as determined by raw frequency, some variant of mutual information, etc.), the stronger the collocation/idiomaticity of the combination of the parts. Based on this assumption, we calculated the association strength between PVs and their contextual subjects/objects, using local mutual information (LMI), cf. Evert (2005) . The LMI scores were based on type-based frequency counts in the DECOW14AX corpus and added as features to the respective contexts, assuming that large LMI scores indicate non-literal PV usage.",
"cite_spans": [
{
"start": 166,
"end": 189,
"text": "Stevenson et al., 2004)",
"ref_id": "BIBREF32"
},
{
"start": 574,
"end": 586,
"text": "Evert (2005)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating Standard Measures of Multiword Idiomaticity",
"sec_num": "5.2.1"
},
{
"text": "Adding the LMI values to the overall best feature set from the main experiments decreased accuracy from 86.8 \u2192 86.0. Using the LMI association strength values of the PV-subject and PVobject pairs by themselves provided slightly but non-significantly better results in comparison to the majority baseline: 65.9 > 64.9. 7 Manual investigations revealed that verb-noun pairs with high LMI scores represent collocations in many cases, but the collocations are not only used in nonliteral language but also in literal language, e.g., \"Sendung ausstrahlen\" (\"broadcast a program\").",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating Standard Measures of Multiword Idiomaticity",
"sec_num": "5.2.1"
},
{
"text": "In order to explore the predicability of literal vs. non-literal uses with respect to specific particles, we trained the best classifier from the main experiments on all particle verbs with particle X and applied the classifier to all particle verbs with particle Y . Our hypothesis was that pairs of particles with similar ambiguities might predict each other better than pairs with different particle meanings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Non-Literality across Particles",
"sec_num": "5.2.2"
},
{
"text": "This PV-specific setup could also be applied within a PV group with the same particle: We trained the classifier on all PVs with particle X except for one, and then applied the trained classifier to the missing PV with particle X . The setup was repeated for all PVs with particle X , and the average accuracy was calculated. Figure 2 provides the results as a heat map, with red indicating high and blue indicating low accuracy scores. The vertical particles on the left correspond to the training particles, and the horizontal particles at the bottom correspond to the test particles. The bottom line shows the majority baseline. For example, training a classifier on \"ein\" PVs and evaluating it on \"aus\" PVs results in an accuracy of 76.56, which is significantly better ( * * * for p < 0.001) than the baseline for \"aus\" (65.55).",
"cite_spans": [],
"ref_spans": [
{
"start": 326,
"end": 334,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Non-Literality across Particles",
"sec_num": "5.2.2"
},
{
"text": "The diagonal in the heat map (showing the within-particle setup) provides particularly high accuracy scores, so the PVs with the same particle predict (non-)literality within the group very well. This demonstrates that the meanings and the meaning shifts across PVs with the same particle (e.g., aufdecken and auftischen) are quite reg-ular. A comparably strong prediction is found between \"vor\" (before/in front of) and \"nach\" (after/behind), with both particles carrying highly similar temporal and local senses. Other examples of strongly related antonymous particle pairs are \"auf\"/\"zu\", \"ein\"/\"aus\", and \"aus\"/\"an\". Examples of strongly related synonymous particle pairs are \"an\"/\"ein\", and \"aus\"/\"zu\". \"durch\" correlates poorly with all other particles, which is probably due to the few sentences we collected from the corpus. \"mit\" also correlates poorly with all other particles, because it is the only particle with little ambiguity. So overall, the heat map corresponds to intuitions about semantic relatedness across particle pairs. : Train a classifier on PVs with particle X and test it on PVs with particle Y .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Non-Literality across Particles",
"sec_num": "5.2.2"
},
{
"text": "An even more fine-grained experiment setting explored the predictability of a specific particle verb based on the classifier trained on a different particle verb. Our hypothesis was that pairs of PVs that predict each other particularly well share some meaning aspects, either (i) because the training and the test verb share the same BV (SameBV: abgraben:aufgraben), or (ii) the PVs are synonymous according to the German Duden 8 dictionary (PVSyn: auftragen:auftischen), or (iii) because the BVs of two PVs with identical particles are synonymous according to the Duden (BVSyn: aufreissen:aufplatzen). Figure 3 shows the f-scores for predicting literality and non-literality across the three settings, in comparison to the main experiments (\"All\"). The number of PV pairs in the settings and the majority accuracy for these PV pairs are also provided, because the experiment sets differ in size. We can see that PVs with the same BV (SameBV) predict each other's classifications well regarding literal but not regarding non-literal sentences. This behaviour illustrates the contribution of the particle to the PV meaning: The same BVs with different particles potentially differ strongly, if the particles do not agree on one or more senses. Synonymous PVs (PVSyn) predict each other as well in literal as in non-literal cases. Since the PVs in all cases are supposed to have the same meaning, this behaviour is also reasonable. An increase in both literal and non-literal F1 is reached for PV pairs with the same particle and synonymous BVs (BVSyn), because the BVs are supposed to carry the same meaning, and the identical particles trigger similar meaning shifts. Overall, the experiment demonstrates that synonymous verbs undergo similar meaning shifts, and that a particle initiates similar meaning shifts when applied to synonymous BVs. ",
"cite_spans": [],
"ref_spans": [
{
"start": 604,
"end": 612,
"text": "Figure 3",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Non-Literality across Particle Verbs",
"sec_num": "5.2.3"
},
{
"text": "In the final part of the paper, we perform a qualitative analysis of the most salient features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Indicators of Non-Literality",
"sec_num": "5.3"
},
{
"text": "First of all, we looked into the feature space by computing the information gain within the best random forest classifier. The information gain (I-Gain) provides the improvement in information entropy regarding our feature space and the class labels, as defined by equation 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Information Gain",
"sec_num": "5.3.1"
},
{
"text": "I-Gain(Class,Feat) = H (Class) \u2212 H (Class|Feat) (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Information Gain",
"sec_num": "5.3.1"
},
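As a minimal sketch of how Equation 1 can be computed for a single discrete feature (an illustration under our own assumptions, not the authors' implementation; the paper applies information gain inside the random forest's feature space):

```python
import math
from collections import Counter, defaultdict

def entropy(labels):
    # Shannon entropy H(X) of a label sequence, in bits.
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def information_gain(classes, feature_values):
    # I-Gain(Class, Feat) = H(Class) - H(Class | Feat)
    assert len(classes) == len(feature_values)
    total = len(classes)
    # Group class labels by feature value to compute the conditional entropy.
    groups = defaultdict(list)
    for c, f in zip(classes, feature_values):
        groups[f].append(c)
    h_conditional = sum(len(g) / total * entropy(g) for g in groups.values())
    return entropy(classes) - h_conditional

# Toy example: a feature that partially predicts literal vs. non-literal usage.
labels = ["lit", "lit", "nonlit", "nonlit"]
feature = ["concrete", "concrete", "abstract", "concrete"]
gain = information_gain(labels, feature)
```

A perfectly predictive feature yields a gain equal to H(Class), while a constant feature yields zero; the toy example above falls in between.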
{
"text": "The information gain does not take feature interaction into account, but determines the importance of the individual features. Applying this method reveals the three most salient features: unigrams (0.31), abstractness ratings of the context nouns (0.17), and distributional fit of the base verbs (0.11). The information gain therefore confirms our results from the main experiments, where these three features worked best.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Information Gain",
"sec_num": "5.3.1"
},
{
"text": "In addition, we noticed that for all features higher weights were given to dimensions that depend on nouns (such as the common nouns in the PV contexts, and the subject and object nouns), in comparison to proper names, verbs, adjectives and adverbs. For example, the abstractness ratings of the adverbs were ranked second lowest with a score of 0.005, and the distributional fit between BVs and adjectives was ranked last with a zero score, indicating that this feature provides no additional information for our dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Information Gain",
"sec_num": "5.3.1"
},
{
"text": "We now take a look at the distributional fit feature, which was the best performing feature in the main experiments, when combined with particle knowledge. Figure 4 focusing on the distributional fit between BVs and common nouns (as determined third best by the information gain) confirms that the feature is helpful in distinguishing literal vs. non-literal PV sentences across particles: The medians in the boxplots for literal sentences are clearly above those for non-literal sentences. The plots confirm that BVs can be exploited to identify compositional uses of PVs (which in turn refer to literal usage).",
"cite_spans": [],
"ref_spans": [
{
"start": 156,
"end": 164,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Distributional Fit",
"sec_num": "5.3.2"
},
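The distributional fit can be sketched as the cosine between a base verb's distributional vector and the vectors of the common nouns in the sentence context; averaging over the context nouns is our own assumption here, as the aggregation step is not spelled out in this section:

```python
import math

def cosine(u, v):
    # Cosine similarity between two dense vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    if norm_u == 0.0 or norm_v == 0.0:
        return 0.0
    return dot / (norm_u * norm_v)

def distributional_fit(bv_vector, context_noun_vectors):
    # Mean cosine between the base-verb vector and each context noun:
    # high values suggest a compositional, hence literal, usage of the PV.
    if not context_noun_vectors:
        return 0.0
    sims = [cosine(bv_vector, n) for n in context_noun_vectors]
    return sum(sims) / len(sims)
```

For example, a BV vector close to the context-noun vectors produces a fit near 1, matching the higher medians observed for literal sentences.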
{
"text": "Looking into individual PVs confirms that this feature distinguishes well between the literal and non-literal sentences. On the other hand, we also find PVs where this feature is not able to identify non-literal language use. Figure 5 presents the boxplots with cosine values for aufbl\u00fchen (blossom out) and auflodern (burn up), where the feature works well, in comparison to absaufen (drown), where the feature cannot distinguish (non-)literal language usage. ",
"cite_spans": [],
"ref_spans": [
{
"start": 226,
"end": 234,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Distributional Fit",
"sec_num": "5.3.2"
},
{
"text": "Finally, we take a look at the abstractness feature, which was also among the best performing features in the main experiments, and which is generally assumed to represent a salient indicator of non-literal language usage. Figure 6 focusing on the abstractness of common nouns in the PV sentences 9 (as determined second best by the information gain) confirms that the feature is also helpful in distinguishing literal vs. non-literal PV sentences across particles: Again, the medians in the boxplots for literal sentences are clearly above those for non-literal sentences. The plots confirm 9 High values indicate concreteness. that contextual abstractness is a salient indicator of non-literal language usage. Looking into individual PVs again confirms that this feature distinguishes well between the literal and non-literal sentences but also that there are PVs where this feature is not able to identify nonliteral language use. Figure 7 presents the boxplots with abstractness ratings for anstauen (accumulate) and durchsickern (leak through), where the feature works well, in comparison to antanzen (waltz in) and especially ausklingen (fade/finish), where the feature cannot distinguish (non-)literal language usage. Two example sentences where the abstractness feature goes wrong for a good reason are as follows. In (1) \"Aber wir sollten doch um f\u00fcnf zum Essen antanzen.\" (But we should show up (lit: waltz in) for dinner at five), the context nouns are concrete (we; dinner) but the language usage is nonliteral. In contrast, in (2) \"Ich liebe Emotionen, deshalb summen alle mit.\" (I love emotions, there-fore everyone hums along), the object noun in the sentence is highly abstract (emotion), but the language usage is literal. These examples illustrate that contextual abstractness is not a perfect indicator of non-literal language usage.",
"cite_spans": [
{
"start": 592,
"end": 593,
"text": "9",
"ref_id": null
}
],
"ref_spans": [
{
"start": 223,
"end": 231,
"text": "Figure 6",
"ref_id": "FIGREF6"
},
{
"start": 934,
"end": 942,
"text": "Figure 7",
"ref_id": "FIGREF7"
}
],
"eq_spans": [],
"section": "Abstractness of Contexts",
"sec_num": "5.3.3"
},
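The abstractness feature can be sketched as the mean rating of the context nouns under a concreteness lexicon (the helper name and the toy rating values below are our own illustration; the paper's ratings come from a large German norm resource):

```python
def mean_concreteness(context_nouns, ratings):
    # Average rating over the context nouns of a PV sentence.
    # Following the paper's convention, HIGH values indicate concreteness,
    # so literal sentences should score higher on average.
    # Nouns missing from the rating lexicon are skipped.
    values = [ratings[n] for n in context_nouns if n in ratings]
    if not values:
        return None  # no rated context noun available
    return sum(values) / len(values)

# Toy lexicon (invented values on a 0-10 concreteness scale).
toy_ratings = {"Essen": 8.5, "Emotion": 2.1, "Wasser": 9.0}
literal_score = mean_concreteness(["Essen", "Wasser"], toy_ratings)
nonliteral_score = mean_concreteness(["Emotion"], toy_ratings)
```

As the antanzen and mitsummen examples show, such a score is only a heuristic: concrete contexts can host non-literal usage and vice versa.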
{
"text": "We presented a classifier that predicts literal vs. non-literal language usage for German particle verbs, a semantically challenging type of multiword expressions. The classifier significantly outperformed the baseline by improving standard features with noun clusters and a PV-specific distributional fit feature. PV-specific experiments indicated that PVs whose particles share aspects of ambiguity and which incorporate semantically related BVs seem to undergo similar meaning shifts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Some PVs appeared < 50 times in the corpus.3 The dataset is accessible from http://www.ims. uni-stuttgart.de/data/pv_nonlit.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Experiments with other classification methods showed similar but inferior performance. Simple Logistic Regression performed 2nd best.5 Remember from Section 4.1 that the unigram information is based on all tokens (12 427) but we implemented the unigrams as a single feature (using the output of a classifier), thus the combined setting is only based on 22 features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We also experimented with the other five contextual feature dimensions, but the results were even worse.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.duden.de",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The research was supported by the DFG Collaborative Research Centre SFB 732 (Maximilian K\u00f6per) and the DFG Heisenberg Fellowship SCHU-2580/1 (Sabine Schulte im Walde).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": "7"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Extracting the Unextractable: A Case Study on Verb Particles",
"authors": [
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
},
{
"first": "Aline",
"middle": [],
"last": "Villavicencio",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 6th Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "98--104",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Timothy Baldwin and Aline Villavicencio. 2002. Ex- tracting the Unextractable: A Case Study on Verb Particles. In Proceedings of the 6th Conference on Computational Natural Language Learning, pages 98-104, Taipei, Taiwan.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "An Empirical Model of Multiword Expression Decomposability",
"authors": [
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Bannard",
"suffix": ""
},
{
"first": "Takaaki",
"middle": [],
"last": "Tanaka",
"suffix": ""
},
{
"first": "Dominic",
"middle": [],
"last": "Widdows",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the ACL Workshop on Multiword Expressions: Analysis, Acquisition and Treatment",
"volume": "",
"issue": "",
"pages": "89--96",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Timothy Baldwin, Colin Bannard, Takaaki Tanaka, and Dominic Widdows. 2003. An Empirical Model of Multiword Expression Decomposability. In Proceed- ings of the ACL Workshop on Multiword Expressions: Analysis, Acquisition and Treatment, pages 89-96, Sapporo, Japan.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Deep Lexical Acquisition of Verb-Particle Constructions",
"authors": [
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2005,
"venue": "Computer Speech and Language",
"volume": "19",
"issue": "",
"pages": "398--414",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Timothy Baldwin. 2005. Deep Lexical Acquisition of Verb-Particle Constructions. Computer Speech and Language, 19:398-414.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Learning about the Meaning of Verb-Particle Constructions from Corpora",
"authors": [
{
"first": "Collin",
"middle": [],
"last": "Bannard",
"suffix": ""
}
],
"year": 2005,
"venue": "Computer Speech and Language",
"volume": "19",
"issue": "",
"pages": "467--478",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Collin Bannard. 2005. Learning about the Meaning of Verb-Particle Constructions from Corpora. Com- puter Speech and Language, 19:467-478.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A Clustering Approach for the Nearly Unsupervised Recognition of Nonliteral Language",
"authors": [
{
"first": "Julia",
"middle": [],
"last": "Birke",
"suffix": ""
},
{
"first": "Anoop",
"middle": [],
"last": "Sarkar",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 11th Conference of the European Chapter of the ACL",
"volume": "",
"issue": "",
"pages": "329--336",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Julia Birke and Anoop Sarkar. 2006. A Clustering Ap- proach for the Nearly Unsupervised Recognition of Nonliteral Language. In Proceedings of the 11th Con- ference of the European Chapter of the ACL, pages 329-336, Trento, Italy.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Active Learning for the Identification of Nonliteral Language",
"authors": [
{
"first": "Julia",
"middle": [],
"last": "Birke",
"suffix": ""
},
{
"first": "Anoop",
"middle": [],
"last": "Sarkar",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Workshop on Computational Approaches to Figurative Language",
"volume": "",
"issue": "",
"pages": "21--28",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Julia Birke and Anoop Sarkar. 2007. Active Learn- ing for the Identification of Nonliteral Language. In Proceedings of the Workshop on Computational Approaches to Figurative Language, pages 21-28, Rochester, NY.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Top Accuracy and Fast Dependency Parsing is not a Contradiction",
"authors": [
{
"first": "Bernd",
"middle": [],
"last": "Bohnet",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 23rd International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "89--97",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bernd Bohnet. 2010. Top Accuracy and Fast Depen- dency Parsing is not a Contradiction. In Proceed- ings of the 23rd International Conference on Compu- tational Linguistics, pages 89-97, Beijing, China.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Optimizing a Distributional Semantic Model for the Prediction of German Particle Verb Compositionality",
"authors": [
{
"first": "Stefan",
"middle": [],
"last": "Bott",
"suffix": ""
},
{
"first": "Sabine",
"middle": [],
"last": "Schulte Im Walde",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 9th International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "509--516",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stefan Bott and Sabine Schulte im Walde. 2014. Opti- mizing a Distributional Semantic Model for the Pre- diction of German Particle Verb Compositionality. In Proceedings of the 9th International Conference on Language Resources and Evaluation, pages 509-516, Reykjavik, Iceland.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Exploiting Fine-grained Syntactic Transfer Features to Predict the Compositionality of German Particle Verbs",
"authors": [],
"year": null,
"venue": "Proceedings of the 11th Conference on Computational Semantics",
"volume": "",
"issue": "",
"pages": "34--39",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Exploiting Fine-grained Syntactic Transfer Features to Predict the Compositionality of German Particle Verbs. In Proceedings of the 11th Conference on Com- putational Semantics, pages 34-39, London, UK.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Methods for the Qualitative Evaluation of Lexical Association Measures",
"authors": [
{
"first": "Stefan",
"middle": [],
"last": "Evert",
"suffix": ""
},
{
"first": "Brigitte",
"middle": [],
"last": "Krenn",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "188--195",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stefan Evert and Brigitte Krenn. 2001. Methods for the Qualitative Evaluation of Lexical Association Mea- sures. In Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics, pages 188-195, Toulouse, France. Stefan Evert. 2005. The Statistics of Word Co- Occurrences: Word Pairs and Collocations. Ph.D. thesis, Institut f\u00fcr Maschinelle Sprachverarbeitung, Universit\u00e4t Stuttgart.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Design and Application of a Gold Standard for Morphological Analysis: SMOR in Validation",
"authors": [
{
"first": "Gertrud",
"middle": [],
"last": "Faa\u00df",
"suffix": ""
},
{
"first": "Ulrich",
"middle": [],
"last": "Heid",
"suffix": ""
},
{
"first": "Helmut",
"middle": [],
"last": "Schmid",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 7th International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "803--810",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gertrud Faa\u00df, Ulrich Heid, and Helmut Schmid. 2010. Design and Application of a Gold Standard for Mor- phological Analysis: SMOR in Validation. In Proceed- ings of the 7th International Conference on Language Resources and Evaluation, pages 803-810, Valletta, Malta.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Catching Metaphors",
"authors": [
{
"first": "Matt",
"middle": [],
"last": "Gedigian",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Bryant",
"suffix": ""
},
{
"first": "Srini",
"middle": [],
"last": "Narayanan",
"suffix": ""
},
{
"first": "Branimir",
"middle": [],
"last": "Ciric",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 3rd Workshop on Scalable Natural Language Understanding",
"volume": "",
"issue": "",
"pages": "41--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matt Gedigian, John Bryant, Srini Narayanan, and Bra- nimir Ciric. 2006. Catching Metaphors. In Proceed- ings of the 3rd Workshop on Scalable Natural Lan- guage Understanding, pages 41-48, New York City, NY.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Deconstructing the Meaning of the German Temporal Verb Particle 'nach' at the Syntax-Semantics Interface",
"authors": [
{
"first": "Boris",
"middle": [],
"last": "Haselbach",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of Generative Grammar in",
"volume": "",
"issue": "",
"pages": "71--92",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Boris Haselbach. 2011. Deconstructing the Meaning of the German Temporal Verb Particle 'nach' at the Syntax-Semantics Interface. In Proceedings of Gen- erative Grammar in Geneva, pages 71-92, Geneva, Switzerland.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Automatically generated affective norms of abstractness, arousal, imageability and valence for 350000 german lemmas",
"authors": [
{
"first": "Maximilian",
"middle": [],
"last": "K\u00f6per",
"suffix": ""
},
{
"first": "Sabine",
"middle": [],
"last": "Schulte Im Walde",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 10th International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maximilian K\u00f6per and Sabine Schulte im Walde. 2016. Automatically generated affective norms of abstract- ness, arousal, imageability and valence for 350000 german lemmas. In Proceedings of the 10th Interna- tional Conference on Language Resources and Evalu- ation, Portoro\u017e, Slovenia.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Can we do better than Frequency? A Case Study on Extracting PP-Verb Collocations",
"authors": [
{
"first": "Brigitte",
"middle": [],
"last": "Krenn",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Evert",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the ACL Workshop on Collocations",
"volume": "",
"issue": "",
"pages": "39--46",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brigitte Krenn and Stefan Evert. 2001. Can we do better than Frequency? A Case Study on Extracting PP-Verb Collocations. In Proceedings of the ACL Workshop on Collocations, pages 39-46, Toulouse, France.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Determining the Degree of Compositionality of German Particle Verbs by Clustering Approaches",
"authors": [
{
"first": "Natalie",
"middle": [],
"last": "K\u00fchner",
"suffix": ""
},
{
"first": "Sabine",
"middle": [],
"last": "Schulte Im Walde",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 10th Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "47--56",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Natalie K\u00fchner and Sabine Schulte im Walde. 2010. Determining the Degree of Compositionality of Ger- man Particle Verbs by Clustering Approaches. In Proceedings of the 10th Conference on Natural Lan- guage Processing, pages 47-56, Saarbr\u00fccken, Ger- many.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "German Particle Verbs with auf. Reconstructing their Composition in a DRT-based Framework",
"authors": [
{
"first": "Andrea",
"middle": [],
"last": "Lechler",
"suffix": ""
},
{
"first": "Antje",
"middle": [],
"last": "Ro\u00dfdeutscher",
"suffix": ""
}
],
"year": 2009,
"venue": "Linguistische Berichte",
"volume": "220",
"issue": "",
"pages": "439--478",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrea Lechler and Antje Ro\u00dfdeutscher. 2009. German Particle Verbs with auf. Reconstructing their Com- position in a DRT-based Framework. Linguistische Berichte, 220:439-478.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Improving Distributional Similarity with Lessons learned from Word Embeddings",
"authors": [
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
}
],
"year": 2015,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "3",
"issue": "",
"pages": "211--225",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Omer Levy, Yoav Goldberg, and Ido Dagan. 2015. Improving Distributional Similarity with Lessons learned from Word Embeddings. Transactions of the Association for Computational Linguistics, 3:211- 225.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Classifier Combination for Contextual Idiom Detection Without Labelled Data",
"authors": [
{
"first": "Linlin",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Caroline",
"middle": [],
"last": "Sporleder",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "315--323",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Linlin Li and Caroline Sporleder. 2009. Classifier Com- bination for Contextual Idiom Detection Without La- belled Data. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 315-323, Singapore.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A Comparison of Event Models for Naive Bayes Text Classification",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "Kamal",
"middle": [],
"last": "Nigam",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the AAAI Workshop on Learning for Text Categorization",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew McCallum and Kamal Nigam. 1998. A Com- parison of Event Models for Naive Bayes Text Clas- sification. In Proceedings of the AAAI Workshop on Learning for Text Categorization.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Detecting a Continuum of Compositionality in Phrasal Verbs",
"authors": [
{
"first": "Diana",
"middle": [],
"last": "Mccarthy",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Keller",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Carroll",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the ACL Workshop on Multiword Expressions: Analysis, Acquisition and Treatment",
"volume": "",
"issue": "",
"pages": "73--80",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diana McCarthy, Bill Keller, and John Carroll. 2003. De- tecting a Continuum of Compositionality in Phrasal Verbs. In Proceedings of the ACL Workshop on Mul- tiword Expressions: Analysis, Acquisition and Treat- ment, pages 73-80, Sapporo, Japan.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Efficient Higher-Order CRFs for Morphological Tagging",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
},
{
"first": "Helmut",
"middle": [],
"last": "Schmid",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "322--332",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas M\u00fcller, Helmut Schmid, and Hinrich Sch\u00fctze. 2013. Efficient Higher-Order CRFs for Morpholog- ical Tagging. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 322-332, Seattle, WA, USA.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Untersuchung der distributionellen Eigenschaften der Lesarten der Partikel 'auf ' mittels Clustering-Methoden",
"authors": [
{
"first": "Stefan",
"middle": [],
"last": "R\u00fcd",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stefan R\u00fcd. 2012. Untersuchung der distributionellen Eigenschaften der Lesarten der Partikel 'auf ' mit- tels Clustering-Methoden. Master's thesis, Insti- tut f\u00fcr Maschinelle Sprachverarbeitung, Universit\u00e4t Stuttgart.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Building Large Corpora from the Web Using a New Efficient Tool Chain",
"authors": [
{
"first": "Roland",
"middle": [],
"last": "Sch\u00e4fer",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Bildhauer",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 8th International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "486--493",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roland Sch\u00e4fer and Felix Bildhauer. 2012. Building Large Corpora from the Web Using a New Efficient Tool Chain. In Proceedings of the 8th International Conference on Language Resources and Evaluation, pages 486-493, Istanbul, Turkey.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Processing and Querying Large Web Corpora with the COW14 Architecture",
"authors": [
{
"first": "Roland",
"middle": [],
"last": "Sch\u00e4fer",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 3rd Workshop on Challenges in the Management of Large Corpora",
"volume": "",
"issue": "",
"pages": "28--34",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roland Sch\u00e4fer. 2015. Processing and Querying Large Web Corpora with the COW14 Architecture. In Pi- otr Ba\u0144ski, Hanno Biber, Evelyn Breiteneder, Marc Kupietz, Harald L\u00fcngen, and Andreas Witt, editors, Proceedings of the 3rd Workshop on Challenges in the Management of Large Corpora, pages 28 -34.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Statistical Metaphor Processing",
"authors": [
{
"first": "Ekaterina",
"middle": [],
"last": "Shutova",
"suffix": ""
},
{
"first": "Simone",
"middle": [],
"last": "Teufel",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "39",
"issue": "",
"pages": "301--353",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ekaterina Shutova, Simone Teufel, and Anna Korhonen. 2013. Statistical Metaphor Processing. Computa- tional Linguistics, 39(2):301-353.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Unsupervised Recognition of Literal and Non-Literal Use of Idiomatic Expressions",
"authors": [
{
"first": "Caroline",
"middle": [],
"last": "Sporleder",
"suffix": ""
},
{
"first": "Linlin",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 12th Conference of the European Chapter of the ACL",
"volume": "",
"issue": "",
"pages": "754--762",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Caroline Sporleder and Linlin Li. 2009. Unsupervised Recognition of Literal and Non-Literal Use of Id- iomatic Expressions. In Proceedings of the 12th Con- ference of the European Chapter of the ACL, pages 754-762, Athens, Greece.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Automatic Classification of German an Particle Verbs",
"authors": [
{
"first": "Sylvia",
"middle": [],
"last": "Springorum",
"suffix": ""
},
{
"first": "Sabine",
"middle": [],
"last": "Schulte Im Walde",
"suffix": ""
},
{
"first": "Antje",
"middle": [],
"last": "Ro\u00dfdeutscher",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 8th International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "73--80",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sylvia Springorum, Sabine Schulte im Walde, and An- tje Ro\u00dfdeutscher. 2012. Automatic Classification of German an Particle Verbs. In Proceedings of the 8th International Conference on Language Resources and Evaluation, pages 73-80, Istanbul, Turkey.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Sentence Generation and Compositionality of Systematic Neologisms of German Particle Verbs",
"authors": [
{
"first": "Sylvia",
"middle": [],
"last": "Springorum",
"suffix": ""
},
{
"first": "Sabine",
"middle": [],
"last": "Schulte Im Walde",
"suffix": ""
},
{
"first": "Antje",
"middle": [],
"last": "Ro\u00dfdeutscher",
"suffix": ""
}
],
"year": 2013,
"venue": "Talk at the Conference on Quantitative Investigations in Theoretical Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sylvia Springorum, Sabine Schulte im Walde, and An- tje Ro\u00dfdeutscher. 2013a. Sentence Generation and Compositionality of Systematic Neologisms of Ger- man Particle Verbs. Talk at the Conference on Quan- titative Investigations in Theoretical Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Regular Meaning Shifts in German Particle Verbs: A Case Study",
"authors": [
{
"first": "Sylvia",
"middle": [],
"last": "Springorum",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Utt",
"suffix": ""
},
{
"first": "Sabine",
"middle": [],
"last": "Schulte Im Walde",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 10th International Conference on Computational Semantics",
"volume": "",
"issue": "",
"pages": "228--239",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sylvia Springorum, Jason Utt, and Sabine Schulte im Walde. 2013b. Regular Meaning Shifts in German Particle Verbs: A Case Study. In Proceedings of the 10th International Conference on Computational Se- mantics, pages 228-239, Potsdam, Germany.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "DRT-based Analysis of the German Verb Particle \"an",
"authors": [
{
"first": "Sylvia",
"middle": [],
"last": "Springorum",
"suffix": ""
}
],
"year": 2011,
"venue": "Leuvense Bijdragen",
"volume": "97",
"issue": "",
"pages": "80--105",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sylvia Springorum. 2011. DRT-based Analysis of the German Verb Particle \"an\". Leuvense Bijdragen, 97:80-105.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Statistical Measures of the Semi-Productivity of Light Verb Constructions",
"authors": [
{
"first": "Suzanne",
"middle": [],
"last": "Stevenson",
"suffix": ""
},
{
"first": "Afsaneh",
"middle": [],
"last": "Fazly",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "North",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 2nd Workshop on Multiword Expressions: Integrating Processing",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Suzanne Stevenson, Afsaneh Fazly, and Ryan North. 2004. Statistical Measures of the Semi-Productivity of Light Verb Constructions. In Proceedings of the 2nd Workshop on Multiword Expressions: Integrating Processing, pages 1-8, Barcelona, Spain.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Metaphor Detection with Cross-Lingual Model Transfer",
"authors": [
{
"first": "Yulia",
"middle": [],
"last": "Tsvetkov",
"suffix": ""
},
{
"first": "Leonid",
"middle": [],
"last": "Boytsov",
"suffix": ""
},
{
"first": "Anatole",
"middle": [],
"last": "Gershman",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Nyberg",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "248--258",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yulia Tsvetkov, Leonid Boytsov, Anatole Gershman, Eric Nyberg, and Chris Dyer. 2014. Metaphor Detection with Cross-Lingual Model Transfer. In Proceedings of the 52nd Annual Meeting of the Association for Com- putational Linguistics, pages 248-258.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Measuring Praise and Criticism: Inference of Semantic Orientation from Association",
"authors": [
{
"first": "Peter",
"middle": [
"D"
],
"last": "Turney",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"L"
],
"last": "Littman",
"suffix": ""
}
],
"year": 2003,
"venue": "ACM Transactions on Information Systems",
"volume": "21",
"issue": "4",
"pages": "315--346",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter D. Turney and Michael L. Littman. 2003. Measuring Praise and Criticism: Inference of Semantic Orientation from Association. ACM Transactions on Information Systems, 21(4):315-346.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Literal and Metaphorical Sense Identification through Concrete and Abstract Context",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Turney",
"suffix": ""
},
{
"first": "Yair",
"middle": [],
"last": "Neuman",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Assaf",
"suffix": ""
},
{
"first": "Yohai",
"middle": [],
"last": "Cohen",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "680--690",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Turney, Yair Neuman, Dan Assaf, and Yohai Cohen. 2011. Literal and Metaphorical Sense Identification through Concrete and Abstract Context. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 680-690, Edinburgh, UK.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "The Availability of Verb-Particle Constructions in Lexical Resources: How much is enough?",
"authors": [
{
"first": "Aline",
"middle": [],
"last": "Villavicencio",
"suffix": ""
}
],
"year": 2005,
"venue": "Computer Speech and Language",
"volume": "19",
"issue": "",
"pages": "415--432",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aline Villavicencio. 2005. The Availability of Verb-Particle Constructions in Lexical Resources: How much is enough? Computer Speech and Language, 19:415-432.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Data Mining: Practical Machine Learning Tools and Techniques",
"authors": [
{
"first": "Ian",
"middle": [
"H"
],
"last": "Witten",
"suffix": ""
},
{
"first": "Eibe",
"middle": [],
"last": "Frank",
"suffix": ""
},
{
"first": "Mark",
"middle": [
"A"
],
"last": "Hall",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ian H. Witten, Eibe Frank, and Mark A. Hall. 2011. Data Mining: Practical Machine Learning Tools and Techniques. Morgan Kaufmann Publishers.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"text": "(b) Statistical significance of differences Acc/Acc+P.",
"uris": null
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"text": "Particles: aus, durch, ein, mit, nach, vor, zu; Avg. Significance: *** for p < 0.001, ** for p < 0.01, * for p < 0.05.",
"uris": null
},
"FIGREF2": {
"type_str": "figure",
"num": null,
"text": "Figure 2: Train a classifier on PVs with particle X and test it on PVs with particle Y .",
"uris": null
},
"FIGREF4": {
"type_str": "figure",
"num": null,
"text": "Prediction for semantically related PVs.",
"uris": null
},
"FIGREF5": {
"type_str": "figure",
"num": null,
"text": "Distributional fit of BVs and context nouns in (non-)literal sentences across particles. Example PVs and their distributional fit of BVs and context nouns in (non-)literal use.",
"uris": null
},
"FIGREF6": {
"type_str": "figure",
"num": null,
"text": "Average abstractness ratings of context nouns in (non-)literal sentences across particles.",
"uris": null
},
"FIGREF7": {
"type_str": "figure",
"num": null,
"text": "Example PVs and their average abstractness ratings of context nouns in (non-)literal use.",
"uris": null
},
"TABREF3": {
"num": null,
"content": "<table/>",
"type_str": "table",
"html": null,
"text": "Main classification results. * marks significance at p < 0.001 and \u2022 at p < 0.05."
}
}
}
}