| { |
| "paper_id": "2020", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T09:42:38.142744Z" |
| }, |
| "title": "Supervised Disambiguation of German Verbal Idioms with a BiLSTM Architecture", |
| "authors": [ |
| { |
| "first": "Rafael", |
| "middle": [], |
| "last": "Ehren", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Heinrich Heine University", |
| "location": { |
| "settlement": "D\u00fcsseldorf", |
| "country": "Germany" |
| } |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Timm", |
| "middle": [], |
| "last": "Lichte", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of T\u00fcbingen", |
| "location": { |
| "settlement": "T\u00fcbingen", |
| "country": "Germany" |
| } |
| }, |
| "email": "timm.lichte@uni-tuebingen.de" |
| }, |
| { |
| "first": "Laura", |
| "middle": [], |
| "last": "Kallmeyer", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Heinrich Heine University", |
| "location": { |
| "settlement": "D\u00fcsseldorf", |
| "country": "Germany" |
| } |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Jakub", |
| "middle": [], |
| "last": "Waszczuk", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Heinrich Heine University", |
| "location": { |
| "settlement": "D\u00fcsseldorf", |
| "country": "Germany" |
| } |
| }, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Supervised disambiguation of verbal idioms (VID) poses special demands on the quality and quantity of the annotated data used for learning and evaluation. In this paper, we present a new VID corpus for German and perform a series of VID disambiguation experiments on it. Our best classifier, based on a neural architecture, yields an error reduction across VIDs of 57% in terms of accuracy compared to a simple majority baseline.", |
| "pdf_parse": { |
| "paper_id": "2020", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Supervised disambiguation of verbal idioms (VID) poses special demands on the quality and quantity of the annotated data used for learning and evaluation. In this paper, we present a new VID corpus for German and perform a series of VID disambiguation experiments on it. Our best classifier, based on a neural architecture, yields an error reduction across VIDs of 57% in terms of accuracy compared to a simple majority baseline.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Figurative language is not just a momentary product of creativity and associative processes, but a vast number of metaphors, metonyms, etc. have become conventionalized and are part of every speaker's lexicon. Still, in most cases, they can simultaneously be understood in a non-figurative, literal way, however implausible this reading might be. Take, for example, the following sentence:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "(1)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "He is in the bathroom and talks to Huey on the big white telephone.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The verbal phrase talk to Huey on the big white telephone can be understood as a figurative euphemism for being physically sick. But it could also be taken literally to describe an act of remote communication with a person called Huey. Despite the ambiguity, a speaker of English will most probably choose the figurative reading in (1), also because of the presence of certain syntactic cues such as the adjective sequence big white or the use of telephone instead of, for example, mobile. Omitting such cues generally makes the reader more hesitant at selecting the figurative meaning. There is thus a strong connection of non-literal meaning and properties pertaining to the form of the expression, which is characterstic for what Baldwin and Kim (2010) call an idiom. Since the figurative expression in (1) consists of a verb and its syntactic arguments, we will furthermore call it a Verbal Idiom (VID) adapting the terminology in Ramisch et al. (2018) . While it is safe to assume that the VID talk to Huey on the big white telephone almost never occurs with a literal reading, this does not hold for all idioms. The expression break the ice for example can easily convey both a literal (The trawler broke the ice) and a non-literal meaning (The welcome speech broke the ice) depending on the subject. Although recent work suggests that literal occurrences of VIDs generally are quite rare in comparison to the idiomatic ones (Savary et al., 2019) , it remains a qualitatively major problem with the risk of serious errors due to wrong disambiguation.", |
| "cite_spans": [ |
| { |
| "start": 733, |
| "end": 755, |
| "text": "Baldwin and Kim (2010)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 935, |
| "end": 956, |
| "text": "Ramisch et al. (2018)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 1431, |
| "end": 1452, |
| "text": "(Savary et al., 2019)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "However, tackling this problem with supervised learning poses special demands on the learning and test data in order to be successful. Most importantly, since the semantic and morphosyntactic properties of VID types (and idioms in general) are very diverse and idiosyncratic, the data must contain a sufficient number of tokens of both the literal and non-literal readings for each VID. In addition, each token should allow access to the context because the context can provide important hints as to the intended reading.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this paper, we investigate the supervised disambiguation of potential occurrences of German VIDs. For training and evaluation, we have created COLF-VID (Corpus of Literal and Figurative Readings of Verbal Idioms), a German annotated corpus of literal and semantically idiomatic occurrences of 34 preselected VID types. Altogether, we have collected 6985 sentences with candidate occurrences that have been semantically annotated by three annotators with high inter-annotator agreement. The annotations overall show a relatively low idiomaticity rate of 77.55 %, while the idiomaticity rates of the single VIDs vary greatly. The derived corpus is made available under the Creative Commons Attribution-ShareAlike 4.0 International license. 1 To the best of our knowledge, it represents the largest available collection of German VIDs annotated on token-level.", |
| "cite_spans": [ |
| { |
| "start": 741, |
| "end": 742, |
| "text": "1", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Furthermore, we report on disambiguation experiments using COLF-VID in order to establish a first baseline on this corpus. These experiments use a neural architecture with different pretrained word representations as inputs. Compared to a simple majority baseline, the best classifier yields an error reduction across VIDs of 57% in terms of accuracy.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this section, we discuss previous work on the creation of token-level corpora of VID types. Cook et al. (2007) draw on syntactic properties of multiword expressions to perform token-level classification of certain VID types. To this end they created a dataset of 2984 instances drawn from the BNC (British National Corpus), covering 53 different verb-noun idiomatic combination (VNIC) types (Cook et al., 2008) . The annotation tag set includes the labels LITERAL, IDIOMATIC and UNKNOWN which correspond to three of the four labels used for COLF-VID, albeit the conditions for the application of UNKNOWN where a bit different, since the annotators only had access to one sentence per instance. The overall reported unweighted Kappa score, calculated on the dev and test set, is 0.76. Split decisions were discussed among the two judges to receive a final annotation.", |
| "cite_spans": [ |
| { |
| "start": 95, |
| "end": 113, |
| "text": "Cook et al. (2007)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 394, |
| "end": 413, |
| "text": "(Cook et al., 2008)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "VID Resources", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "The VU Amsterdam Metaphor Corpus (Steen et al., 2010) is currently probably the largest manually annotated corpus of non-literal language and is freely available. It comprises roughly 200,000 English sentences from different genres and provides annotations basically for all non-functional words following a refined version of the Metaphor Identification Procedure (MIP) (Pragglejaz Group, 2007) . Regarding only verbs, this yields an impressive overall number of 37962 tokens with 18.7% \"metaphor-related\" readings (Steen et al., 2010; Herrmann, 2013) . Due to its general purpose and the lack of lexical filtering, however, this is hardly comparable with COLF-VID.", |
| "cite_spans": [ |
| { |
| "start": 33, |
| "end": 53, |
| "text": "(Steen et al., 2010)", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 383, |
| "end": 395, |
| "text": "Group, 2007)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 516, |
| "end": 536, |
| "text": "(Steen et al., 2010;", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 537, |
| "end": 552, |
| "text": "Herrmann, 2013)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "VID Resources", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "The IDIX (IDioms In Context) corpus created by Sporleder et al. (2010) can be seen as the English counterpart of COLF-VID. It is an add-on to the BNC XML Edition and contains 5836 annotated instances of 78 pre-selected VIDs mainly of the form V+NP and V+PP. As for our corpus, expressions were favoured that presumably had a high literality rate. The employed tag set was more or less identical with ours. Quite remarkably, and in stark contrast to COLF-VID and other comparable corpora, the literal occurrences in the IDIX corpus represent the majority class with 49.4% (vs. 45.4% instances being tagged as NON-LITERAL). They report a Kappa score of 0.87 which was evaluated using 1,136 instances that were annotated independently by two annotators.", |
| "cite_spans": [ |
| { |
| "start": 47, |
| "end": 70, |
| "text": "Sporleder et al. (2010)", |
| "ref_id": "BIBREF23" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "VID Resources", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Fritzinger et al. (2010) conduct a survey on a German dataset similar to ours. They extracted 9700 instances of 77 potentially idiomatic prepositionnoun-verb triples from two different corpora. Two annotators independently classified the candidates according to whether they were used literally or idiomatically in a given context. The tag set also included an AMBIGUOUS label, but, as was the case with Cook et al. (2008) , only single sentences were available as context to determine the correct reading. An agreement rate of 97.9% was computed on the basis of 6,690 instances. The biggest difference to our and other presented corpora is the very high idiomaticity rate of 96.12%. However, this dataset does not seem to be publicly available. Horbach et al. (2016) are concerned with German infitive-verb compounds such as sitzen lassen ('let sit'\u21d2'leave someone'), i.e. verb groups with an idiomatic reading that consist of an inflected head verb and an infinitive modifier. In order to conduct experiments on automatic detection and disambigution of these kinds of VIDs they created a corpus of 6000 instances of 6 different infinitiveverb compounds which were annotated by two experts with the label set LITERAL, IDIOMATIC and ? (for undecidable). In contrast to Cook et al. (2008) and Fritzinger et al. (2010) , a context of one sentence to the left and one sentence to the right of the candidate was taken into account. The annotation process proved to be especially challenging since some of the examined compounds had several literal and figurative meanings. Nevertheless, they achieved high agreement values of (0.6 < \u03ba < 0.8) or (\u03ba > 0.8) for most expressions with a mean idiomaticity rate of 65.5%. 2", |
| "cite_spans": [ |
| { |
| "start": 404, |
| "end": 422, |
| "text": "Cook et al. (2008)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 746, |
| "end": 767, |
| "text": "Horbach et al. (2016)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 1269, |
| "end": 1287, |
| "text": "Cook et al. (2008)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 1292, |
| "end": 1316, |
| "text": "Fritzinger et al. (2010)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "VID Resources", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Even though literal occurrences of VIDs seem to be a rare phenomenon (Savary et al., 2019) , it is still desirable to account for them, i.e. to disambiguate between idiomatic and literal reading. It may be a quantitatively minor problem, but qualitatively it continues to be a major challenge for NLP, for instance for machine translation systems.", |
| "cite_spans": [ |
| { |
| "start": 69, |
| "end": 90, |
| "text": "(Savary et al., 2019)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "VID Disambiguation", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "VIDs exhibit a variety of properties exploitable for determining the correct reading of a candidate expression. On the morphosyntactic level a lot of VIDs are less flexible than their literal counterparts, e.g. the idiomatic kick the bucket is not readily passivizable. On the semantic level VIDs often disrupt the cohesion of a sentence, because of their non-compositionality, or they violate selectional preferences, for example in the sentence The city shows its teeth.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "VID Disambiguation", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Examples for a morphosyntactic approach are the works of Cook et al. (2007) and Fazly et al. (2009) . They show that it is possible to leverage automatically acquired knowledge about the syntactic behaviour of VNICs, i.e. their syntactic fixedness, to perform token-level disambiguation. Katz and Giesbrecht (2006) draw on semantic properties by using dense word vectors to identify literal and idiomatic occurrences of the German VID ins Wasser fallen (idiomatically 'to be cancelled', literally 'to fall into the water'). They assumed that the contexts of the literal and idiomatic use of this expression differ which in turn is represented by their distributional vectors. Test instances are then compared to these vectors in order to classify them. Li and Sporleder (2009) and Ehren (2017) both used cohesion-based graphs for the disambiguation task, the assumption being that semantically idiomatic expressions disrupt the cohesion of the context they appear in. The former used Normalized Google Distance, while the latter used the cosine between word embeddings to capture the semantic similarity of words. To classify the test instances in an unsupervised way, graphs were built based on the two mentioned metrics and if the mean value rose after the removal of the instance, it was classified as idiomatic. Shutova et al. (2010) and Haagsma and Bjerva (2016) employ the knowledge that metaphors tend to violate selectional preferences to detect them in running text.", |
| "cite_spans": [ |
| { |
| "start": 57, |
| "end": 75, |
| "text": "Cook et al. (2007)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 80, |
| "end": 99, |
| "text": "Fazly et al. (2009)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 288, |
| "end": 314, |
| "text": "Katz and Giesbrecht (2006)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 753, |
| "end": 776, |
| "text": "Li and Sporleder (2009)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 1316, |
| "end": 1337, |
| "text": "Shutova et al. (2010)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 1342, |
| "end": 1367, |
| "text": "Haagsma and Bjerva (2016)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "VID Disambiguation", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Building on these insights from previous work, in this paper, we will use a BiLSTM architecture based on different types of word embeddings that is intended to capture the semantic properties of the VID itself, together with the context and the morphosyntactic flexibility of the specific VID instance.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "VID Disambiguation", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "3 The Creation of the Corpus", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "VID Disambiguation", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "As mentioned above, literal occurrences of VIDs usually seem to occur quite rarely. The German dataset of the PARSEME 1.1 corpus (Ramisch et al., 2018) consists of 8996 sentences with 1341 instances of VIDs. These 1341 instances have an idiomaticity rate of 98%, i.e. the whole dataset only includes a handful of literal occurrences. Training and evaluating a classifier with such an imbalance of classes would prove rather difficult. Thus, it is not feasible to gather a sufficient amount of data by selecting sentences at random -at least if human resources are limited -and it is not possible to build a huge dataset so that the natural occurrence rate will give us enough literal readings. In order to alleviate the data sparsity, we hand-picked a number of VID types with presumably high numbers of literal occurrences. Afterwards we extracted sentences (along with their contexts) from the German newspaper corpus T\u00fcPP-D/Z 3 that contained the lexical components of our VID types as lemmas.", |
| "cite_spans": [ |
| { |
| "start": 129, |
| "end": 151, |
| "text": "(Ramisch et al., 2018)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Data", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "We then manually filtered out coincidental occurrences with an undesired coarse syntactic structure (Savary et al., 2019) , leaving us with only valid candidates for our corpus. Table 1 shows the 34 different types. One thing that immediately stands out is the fact that most of the pre-chosen VID types (26 to be exact) consist of a prepositional phrase (PP) and a verb. The rest consists of verbnoun combinations with the noun in direct object position. Another salient property of this dataset is the high variance with respect to the number of candidates per type. For the VID an Glanz verlieren ('loose sheen'\u21d2'loose attractivity'), we only found 5 instances, while auf dem Tisch liegen ('lay on the table'\u21d2'be topic') is represented by 951 candidates.", |
| "cite_spans": [ |
| { |
| "start": 100, |
| "end": 121, |
| "text": "(Savary et al., 2019)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 178, |
| "end": 185, |
| "text": "Table 1", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "The Data", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Besides the labels LITERAL, IDIOMATIC we also use the labels UNDECIDABLE and BOTH in cases where an expression can be seen as LITERAL and IDIOMATIC at the same time for different reasons. As to UNDECIDABLE, the disambiguation of an expressions is not possible due to the lack of context. For instance, this is notoriously difficult for metonymic expressions whose literal meaning describes a bodily action that typically co-occurs with the idiomatic meaning. An example of that is the German expression sich die Haare raufen ('to scuffle one's hair'\u21d2'to be worried/upset'): A person that is upset can often be seen scuffling their hair. 4 By contrast, the label BOTH applies to cases where the literal and idiomatic readings seem to be both intended, as illustrated in (2):", |
| "cite_spans": [ |
| { |
| "start": 637, |
| "end": 638, |
| "text": "4", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Annotation Labels", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "(2) This sentence originates from an article depicting proposals on how to proceed with the statue of a certain historic personality and it contains the VIDs jmdm. den Kopf waschen ('wash someone's head'\u21d2'scold someone'), jmdm. auf den Zahn f\u00fchlen ('feel someone's tooth'\u21d2'interrogate someone') and jmdn. auf den Arm nehmen ('take someone on your arm'\u21d2'taunt someone' 5 ). The author of the sentence suggests to tear the statue down and to perform the aforementioned actions in an effort to demystify the person represented by the statue. The wordplay used here relies on the fact that all the VIDs relate to bodily actions and could be performed on a statue. Thus, both readings, literal and idiomatic, are active at the same time.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Annotation Labels", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "The annotation guidelines basically consisted of definitions of the applicable labels, coupled with examples. A condensed version of the definitions is given below:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Annotation Guidelines", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "\u2022 LITERAL: In the context of this annotation task we equate literality with compositionality. We understand compositionality as the property that the semantics of an expression is determined by the most basic meanings of its components without any form of figuration involved.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Annotation Guidelines", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "\u2022 IDIOMATIC: According to Baldwin and Kim (2010) 6 there are different forms of idiomaticity: lexical, syntactic, semantic, pragmatic and statistical. In the context of this annotation task, \"idiomatic\" is used synonymously with \"semantically idiomatic\", i.e. the property of an expression that it is not possible to fully derive its meaning by only considering the semantics of its components. Thus we understand semantic idiomaticity as a lack of compositionality.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Annotation Guidelines", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "\u2022 UNDECIDABLE: This label is for cases in which it is not possible to decide whether the target expression is literal or idiomatic.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Annotation Guidelines", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "\u2022 BOTH: While the label UNDECIDABLE means that there is only one possible reading, but it's not feasible to decide which, the label BOTH denotes the phenomenon of the two readings being activated at the same time.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Annotation Guidelines", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "The annotation task then consisted of applying one of the labels to each candidate.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Annotation Guidelines", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "The annotation was performed by three trained linguists on the whole dataset. The annotation results are summarized in Table 1 . Columns 2 to 5 contain the counts of the majority decisions for the different labels, while column 6 contains the idiomaticity rate of a VID type. Figure 1 shows an example for an instance of the VID type die Notbremse ziehen ('pull the emergency breaks'\u21d2'quickly terminate a process') 7 in the column format of the corpus. The # global.columns = ID FORM LEMMA POS ANNO_1 ANNO_2 ANNO_3 MAJORITY_ANNO # article_id = T890825.128 # text = Bundesbahn will die Notbremse ziehen # context_judgement_1 = 0 # context_judgement_2 = 0 1 Bundesbahn Bundesbahn NN * * * * 2 will wollen VMFIN * * * * 3 die die ART * * * * 4 Notbremse Notbremse NN 2 2 2 2 5 ziehen ziehen VVINF 2 2 2 2 Figure 1 : A sample idiomatic instance in COLF-VID last four columns contain the annotations: columns 5 to 7 are the annotations of the three different annotators, the last column contains the majority annotation. Since all the annotators agreed that the reading of this instance is idiomatic (2 stands for the tag IDIOMATIC), this is an example for a clearcut decision. In the rare cases where there was a split decision and every annotator chose a different label, the label UNDECIDABLE was employed.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 119, |
| "end": 126, |
| "text": "Table 1", |
| "ref_id": "TABREF1" |
| }, |
| { |
| "start": 276, |
| "end": 284, |
| "text": "Figure 1", |
| "ref_id": null |
| }, |
| { |
| "start": 802, |
| "end": 810, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Annotation Results", |
| "sec_num": "4" |
| }, |
| { |
| "text": "What immediately stands out is that the overall idiomaticity rate is not nearly as high as the 98% reported for the German PARSEME dataset mentioned in Section 3.1 It ranges from 19.44% (im Blut haben 'be in one's blood') to 99.65% 8 (den Nerv treffen) and is 77.55% in total. But one has to keep in mind that these two datasets are hardly comparable regarding their statistics, since COLF-VID was created with the intention to maximize the number of literal occurrences by only choosing VID types with a presumably high literality count. Even though there are some VID types with an unexpectedly high idiomaticity rate (auf der Strecke bleiben, in eine Sackgasse geraten or\u00fcber die B\u00fchne gehen to name a few), the large majority of the chosen VID types is indeed represented with a relatively low idiomaticity rate.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Annotation Results", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Only 0.59 of the instances received the labels UNDECIDABLE or BOTH (see Figure 2 ), but this is hardly surprising. We nevertheless wanted to include these tags for the sake of completeness and linguistic interest.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 72, |
| "end": 80, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Annotation Results", |
| "sec_num": "4" |
| }, |
| { |
| "text": "For the three annotators we calculated the following Cohen's Kappa scores on the basis of the whole dataset:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Annotation Results", |
| "sec_num": "4" |
| }, |
| { |
| "text": "\u2022 annotator 1 -annotator 2: 0.9", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Annotation Results", |
| "sec_num": "4" |
| }, |
| { |
| "text": "\u2022 annotator 2 -annotator 3: 0.8", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Annotation Results", |
| "sec_num": "4" |
| }, |
| { |
| "text": "\u2022 annotator 1 -annotator 3: 0.77 Thus, the agreement is high for all three annotators, which is expected given the nature of the task and the equally high agreement scores reported for comparable corpora (Cook et al. (2008) Another feature of COLF-VID is the context judgement provided by two of the annotators. These judgements can be seen in Figure 1 in the last two lines (starting with a hash tag) before the beginning of the sentence. They indicate whether the annotators needed more than one sentence to determine the reading of an instance. The two zeros denote that this was not the case for this candidate expression (\"1\" would indicate the opposite). Even if the sentence is rather short with only five words, the fact that the pulling of an emergency break requires an animate agent if used literally was enough information for both annotators to make their decisions. The context judgement feature provides the possibility of excluding candidates where none of the annotators was able to determine the reading only from a single sentence. As a result, instances where one sentence is not sufficient to make an informed decision would be prevented from entering a given system (e.g. a classifier which aims to disambiguate the candidates). from identification, where all the VID occurrences are to be identified in a sentence, e.g. by applying a sequential model to label every token as a VID component or not. The reason for this is that COLF -for now -is a lexical sample corpus, which means it consists of a pre-selected set of target expressions annotated with respect to their contexts. In other words, the sentences could contain non-annotated instances of VID types that weren't part of the preselected set, which in turn could confuse the system during training and skew the evaluation results (we will further address this issue in section 6.)", |
| "cite_spans": [ |
| { |
| "start": 204, |
| "end": 223, |
| "text": "(Cook et al. (2008)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 344, |
| "end": 352, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Annotation Results", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Thus, we modeled the task assuming another process had pre-identified the candidate expressions, which is the usual approach when it comes to the disambiguation of VIDs (Constant et al., 2017) . The classifier then only has to decide which label to apply given a certain instance and its context. This means that, although all components of a VID instance received a label during annotation 9 (cf. Figure 1 ), during classification we conflated all labels of a VID instance into one label for the whole expression. This is possible, since we did not allow for components of an instance to have different labels. For example, the verb cannot be literal while the noun is idiomatic.", |
| "cite_spans": [ |
| { |
| "start": 169, |
| "end": 192, |
| "text": "(Constant et al., 2017)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 398, |
| "end": 406, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "VID Disambiguation Experiments", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Word Representations During the experiments we employed word representations that were pretrained on other, considerably larger corpora with three different models: Word2vec (Skip-gram) (Mikolov et al., 2013) , fastText (CBOW) (Bojanowski et al., 2016) and ELMo (Embeddings from Language Models) (Peters et al., 2018) . We trained the Word2vec embeddings ourselves 10 on 9 In order to allow for a different kind of task at a later point. 10 We used the word2vec implementation of the python package gensim (\u0158eh\u016f\u0159ek and Sojka, 2010). a variant of the German web corpus DECOW16 (Sch\u00e4fer and Bildhauer, 2012) which consists of 11 billion tokens and shuffled sentences. The resulting vectors have 100 dimensions. As for the other models we reverted to already existing resources. The fastText embeddings were trained on Common Crawl and Wikipedia with a dimensionality of 300 11 . The German ELMo model was trained on a special Wikipedia corpus that also included the comments besides the articles (May, 2019) 12 . The underlying bidirectional language model provided us with 3 different word representations of size 1024 for each input token. These were averaged to give us one embedding per token.", |
| "cite_spans": [ |
| { |
| "start": 186, |
| "end": 208, |
| "text": "(Mikolov et al., 2013)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 227, |
| "end": 252, |
| "text": "(Bojanowski et al., 2016)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 296, |
| "end": 317, |
| "text": "(Peters et al., 2018)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 371, |
| "end": 372, |
| "text": "9", |
| "ref_id": null |
| }, |
| { |
| "start": 438, |
| "end": 440, |
| "text": "10", |
| "ref_id": null |
| }, |
| { |
| "start": 576, |
| "end": 605, |
| "text": "(Sch\u00e4fer and Bildhauer, 2012)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 994, |
| "end": 1005, |
| "text": "(May, 2019)", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "VID Disambiguation Experiments", |
| "sec_num": "5" |
| }, |
| { |
"text": "Architecture There are different properties on the morphosyntactic and semantic level that we can leverage during the disambiguation process. For example, some VIDs do not possess the same lexical or morphological flexibility as their literal counterparts. The VID kick the bucket, for instance, does not allow for bucket to be replaced by a synonym like pail or to appear in the plural, hence both would be strong indicators of literality. On the semantic level, the surrounding context can of course give clues about the correct reading. An observation made during annotation was that, over and over again, the violation of selectional preferences gave a strong indication of how to annotate a candidate. For example, in a sentence like Berlin holds its breath, Berlin is not an animate subject, which immediately gives away the non-literal nature of the sentence. This is why we settled on a classifier architecture that is well suited for taking the context into account. Figure 3 shows a graph of our architecture.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 963, |
| "end": 971, |
| "text": "Figure 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "VID Disambiguation Experiments", |
| "sec_num": "5" |
| }, |
| { |
"text": "For an input sentence s of length n with words w_1, ..., w_n we associate every word w_i with its corresponding pretrained word embedding, which gives us our input sequence of vectors x_1:n:",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "VID Disambiguation Experiments", |
| "sec_num": "5" |
| }, |
| { |
"text": "x_i = e(w_i)",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "VID Disambiguation Experiments", |
| "sec_num": "5" |
| }, |
| { |
"text": "When we use Word2vec embeddings, a sequence w_1:n consists of lemmas, while for fastText it consists of tokens, because the former model was trained on lemmas and the latter on tokens with character n-grams.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "VID Disambiguation Experiments", |
| "sec_num": "5" |
| }, |
| { |
"text": "After the embedding assignment, the sequence x_1:n is fed into a bidirectional recurrent neural network",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "VID Disambiguation Experiments", |
| "sec_num": "5" |
| }, |
| { |
"text": "[Figure 3 content: the example sentence Das Konzert fiel ins Wasser, where each token embedding (emb) is fed through a forward and a backward LSTM whose outputs are concatenated into a contextualized vector v per token.]",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "VID Disambiguation Experiments", |
| "sec_num": "5" |
| }, |
| { |
"text": "(Score_Literal, Score_Idiomatic, Score_Undecidable, Score_Both) Figure 3 : Architecture of the neural model with LSTM (Hochreiter and Schmidhuber, 1997) units (BiLSTM) in order to receive contextualized",
| "cite_spans": [ |
| { |
| "start": 123, |
| "end": 157, |
| "text": "(Hochreiter and Schmidhuber, 1997)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 69, |
| "end": 77, |
| "text": "Figure 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "MLP", |
| "sec_num": null |
| }, |
| { |
"text": "representations v_i of each input element w_i: v_i = LSTM_\u03b8F(x_1:n, i) \u2022 LSTM_\u03b8B(x_1:n, i)",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MLP", |
| "sec_num": null |
| }, |
| { |
"text": "The contextualized representation v_i is the concatenation (denoted by \u2022) of the outputs computed by the forward (LSTM_\u03b8F) and backward (LSTM_\u03b8B) LSTMs. Hence, v_i ideally contains information about all the preceding and succeeding items.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MLP", |
| "sec_num": null |
| }, |
| { |
| "text": "We then take two of those vectors, namely those for the verb and noun of the potential VID 13 , concatenate them and feed the result into a multi-layer perceptron (MLP) to obtain the final scores:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MLP", |
| "sec_num": null |
| }, |
| { |
"text": "SCORE(v_i \u2022 v_j) = MLP(v_i \u2022 v_j)",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MLP", |
| "sec_num": null |
| }, |
| { |
"text": "where v_i and v_j are the contextualized representations of the verb and the noun of the potential VID, respectively. We did not include prepositions in the input for the final scoring, because some expressions in COLF-VID come without a lexicalized preposition (even though most do).",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MLP", |
| "sec_num": null |
| }, |
| { |
"text": "Until now, we only considered Word2vec and fastText embeddings as inputs. However, for ELMo things are a bit different on the input level. While Word2vec and fastText are functions that map each word to exactly one embedding, ELMo assigns different embeddings to the same word, depending on its context: 13 Remember, we assume for this task that another process has already identified the candidate expressions.",
| "cite_spans": [ |
| { |
| "start": 304, |
| "end": 306, |
| "text": "13", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MLP", |
| "sec_num": null |
| }, |
| { |
"text": "x_i = ELMo(w_1:n, i)",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MLP", |
| "sec_num": null |
| }, |
| { |
"text": "This means that we introduce context already at the very beginning, which we assume is a great advantage for the system, since the components of the candidates receive different vectors depending on their context. For example, during the classification process with Word2vec or fastText embeddings, the word ice in the sentences The weight of the ship broke the ice and With a joke he broke the ice would receive the same vector, while ELMo should assign them different representations.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MLP", |
| "sec_num": null |
| }, |
| { |
"text": "We split the COLF-VID dataset into train (70%), validation (15%) and test (15%) data. During the split we had to account for the high variance in the number of instances per VID type, so as to make sure that every split mirrors the distribution of types in the original data. For example, am Boden liegen (48 instances) and auf dem Tisch liegen (951 instances) are represented with the same ratio in all three data sets.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training and Hyperparameters", |
| "sec_num": null |
| }, |
| { |
"text": "The objective of the training was to minimize the cross-entropy loss, and for optimization we used the gradient descent variant Adam with a learning rate of 0.01. As for the labels, we chose the majority annotation. We trained the models for 15 (Word2vec, fastText) and 18 (ELMo) epochs with a batch size of 30. The input size of our models depended on the dimensionality of the pretrained embeddings, which was 100 (Word2vec), 300 (fastText) and 1024 (ELMo). The forward and backward LSTMs were one-layered and the size of the hidden state was 100 for all three models, despite the considerable difference in input sizes, which could have warranted testing larger hidden states for larger embeddings. We refrained from doing so to keep the number of parameters in the MLP constant and thereby the model computationally less expensive. Hence, the MLP itself had an input size of 400 for all models, coupled with a hidden layer of size 100 and an output layer of size 4. The implementations of the three models are available on GitHub. 14",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training and Hyperparameters", |
| "sec_num": null |
| }, |
| { |
"text": "In this section we present the results of our experiments on the disambiguation of German VIDs in context (see Table 2 ). We report precision, recall and F1-score for the two classes with the most instances, IDIOMATIC and LITERAL, as well as the weighted macro-average for all classes combined. Since there was such a low number of instances with the labels UNDECIDABLE and BOTH for the system to train on (only 28 in the train set), it performed poorly on those classes and misclassified all of their instances. In order to account for this stark class imbalance, we settled on the weighted macro-average instead of the unweighted macro-average and did not include detailed (precision/recall/F1) scores for the two low-frequency classes.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 116, |
| "end": 123, |
| "text": "Table 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "5.2" |
| }, |
| { |
"text": "Overall Results As a baseline we chose a simple majority classifier, which already represents a nontrivial hurdle because of the high idiomaticity rate of COLF-VID. Still, with respect to the F1-score, our system clears it with all three input types and shows considerable improvements. Furthermore, in line with our hypothesis, the fastText embeddings improved over Word2vec and were in turn surpassed by ELMo. Table 2 shows the increased performance across both classes for the validation and the test set. The highest F1-scores on the validation (89.14) and the test (89.82) set were achieved with ELMo embeddings.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 432, |
| "end": 439, |
| "text": "Table 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "5.2" |
| }, |
| { |
"text": "We suspect that the superiority of fastText and ELMo over Word2vec lies in the fact that the two former models incorporate subword information. This should allow the classifier to detect morphosyntactic features that give clues about the correct reading of an expression, e.g. when it encounters a form of inflection unusual for a VID, which tends to be morphosyntactically fixed. This is something our Word2vec model cannot accomplish, since it was trained on lemmas. Also, it would have been surprising if ELMo's ability to handle polysemy had not been an advantage in a disambiguation task, as context is already introduced at the input level.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "5.2" |
| }, |
| { |
"text": "One apparent weakness of our system is its weaker performance on the LITERAL class in comparison to the IDIOMATIC class, which is hardly a surprise considering the unbalanced distribution of labels. Still, a maximum F1-score of 79.07 for LITERAL shows that our efforts to keep the idiomaticity rate of COLF-VID low bore some fruit. Table 3 shows a more fine-grained evaluation of the best-performing system by listing the results per VID on the test set. The classifier achieves its best results (100.00 F1-score) for an Glanz verlieren, an Land ziehen, am Pranger stehen, im Blut haben, in eine Sackgasse geraten, im Schatten stehen, in Schieflage geraten and einen Nerv treffen. That was to be expected, since all these VIDs have a high rate of idiomatic or literal readings, a fact the classifier very likely learnt during training, thus assigning a higher probability to the majority label. Nonetheless, even for those VID types it does not seem to mindlessly apply one label all the time. For example, for an Land ziehen and im Blut haben, it correctly classifies the relatively few instances of their respective minority class.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 329, |
| "end": 336, |
| "text": "Table 3", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "5.2" |
| }, |
| { |
"text": "Still, arguably the most interesting VID types with respect to the disambiguation task are those with a (relatively speaking) more balanced distribution of classes, like auf der Stra\u00dfe stehen, auf dem Tisch liegen, eine Br\u00fccke bauen, in den Keller gehen, im Regen stehen, ins Wasser fallen, Luft holen, eine Rechnung begleichen, von Bord gehen, vor der T\u00fcr stehen, ein Zelt aufschlagen or \u00fcber Bord gehen, all of which have idiomaticity rates between 38.82% and 79.68%. For all but four of those expressions, the system achieves F1-scores between 82.54 and 94.45. For ein Zelt aufschlagen (65.08), von Bord gehen (70.24), Luft holen (75.11) and eine Rechnung begleichen (78.95), the F1-scores are below 80. It would be interesting to investigate whether the difference in performance for the various VID types correlates with the inter-annotator agreement (IAA). We leave this question to future work.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "VID-specific Evaluation", |
| "sec_num": null |
| }, |
| { |
| "text": "In this paper we presented COLF-VID, a new corpus with annotated instances of German VIDs and their literal counterparts. Furthermore, we experimented with VID disambiguation on the new corpus and showed that significant improvements can be gained from applying a neural architecture in comparison with a simple majority baseline. The experiments additionally demonstrated the effects of the different word representations on the resulting performance.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion/Future Work", |
| "sec_num": "6" |
| }, |
| { |
"text": "For the future we plan on extending the annotation of COLF-VID to those VIDs that were not in the set of pre-chosen expressions and consequently were not annotated. This would allow us to use the corpus as a basis for an identification task and not just disambiguation. Concerning the disambiguation task itself, a cornucopia of different approaches, be they supervised or unsupervised, can be imagined. We plan on conducting a survey of different approaches in an attempt to reveal which architectures, context sizes and features are best suited for the task. Last but not least, cross-linguistic experiments with comparable corpora (e.g. IDIX) could be interesting in order to explore language-specific properties of VIDs.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion/Future Work", |
| "sec_num": "6" |
| }, |
| { |
| "text": "https://github.com/rafehr/COLF-VID", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Kappa scores and idiomaticity rates were reported independently for each expression.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "http://hdl.handle.net/11858/ 00-1778-0000-0007-5E99-D", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
"text": "Pull out one's hair would be the English equivalent, but people very seldom actually pull out their hair when upset, except perhaps under huge emotional distress.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
"text": "The literal meaning of jmdn. auf den Arm nehmen would be 'pick someone up'. A translation to English that keeps reference to a corresponding bodily action would be to pull someone's leg. 6 The annotators were required to read Baldwin and Kim (2010) prior to the annotation.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
"text": "Translation: \"The federal railway wants to pull the emergency brakes\". The combination of federal railway and pull the emergency brakes is very frequent in COLF-VID for obvious reasons. 8 Am Pranger stehen 'stand in the pillory' has an idiomaticity rate of 100%, but its 5 candidates might not be that representative.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
"text": "https://fasttext.cc/docs/en/crawl-vectors.html 12 https://github.com/t-systems-on-site-services-gmbh/german-elmo-model",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
"text": "https://github.com/rafehr/colf-bilstm-classifier",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "We would like to thank Julia Fischer and Kevin Pochwyt for their annotations. We also would like to thank the anonymous reviewers for their helpful reviews and suggestions. This work has been supported by the Deutsche Forschungsgemeinschaft (DFG) within the CRC 991 \"The Structure of Representations in Language, Cognition, and Science\".", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgements", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Multiword expressions. Handbook of natural language processing", |
| "authors": [ |
| { |
| "first": "Timothy", |
| "middle": [], |
| "last": "Baldwin", |
| "suffix": "" |
| }, |
| { |
| "first": "Su", |
| "middle": [ |
| "Nam" |
| ], |
| "last": "Kim", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "", |
| "volume": "2", |
| "issue": "", |
| "pages": "267--292", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Timothy Baldwin and Su Nam Kim. 2010. Multiword expressions. Handbook of natural language process- ing, 2:267-292.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Enriching word vectors with subword information", |
| "authors": [ |
| { |
| "first": "Piotr", |
| "middle": [], |
| "last": "Bojanowski", |
| "suffix": "" |
| }, |
| { |
| "first": "Edouard", |
| "middle": [], |
| "last": "Grave", |
| "suffix": "" |
| }, |
| { |
| "first": "Armand", |
| "middle": [], |
| "last": "Joulin", |
| "suffix": "" |
| }, |
| { |
| "first": "Tomas", |
| "middle": [], |
| "last": "Mikolov", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1607.04606" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2016. Enriching word vec- tors with subword information. arXiv preprint arXiv:1607.04606.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
"title": "Multiword expression processing: A survey",
| "authors": [ |
| { |
| "first": "Mathieu", |
| "middle": [], |
| "last": "Constant", |
| "suffix": "" |
| }, |
| { |
| "first": "G\u00fcl\u015fen", |
| "middle": [], |
| "last": "Eryi\u01e7it", |
| "suffix": "" |
| }, |
| { |
| "first": "Johanna", |
| "middle": [], |
| "last": "Monti", |
| "suffix": "" |
| }, |
| { |
| "first": "Lonneke", |
| "middle": [], |
| "last": "Van Der", |
| "suffix": "" |
| }, |
| { |
| "first": "Carlos", |
| "middle": [], |
| "last": "Plas", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Ramisch", |
| "suffix": "" |
| }, |
| { |
| "first": "Amalia", |
| "middle": [], |
| "last": "Rosner", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Todirascu", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Computational Linguistics", |
| "volume": "43", |
| "issue": "4", |
| "pages": "837--892", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mathieu Constant, G\u00fcl\u015fen Eryi\u01e7it, Johanna Monti, Lonneke van der Plas, Carlos Ramisch, Michael Rosner, and Amalia Todirascu. 2017. Survey: Mul- tiword expression processing: A Survey. Computa- tional Linguistics, 43(4):837-892.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Pulling their weight: Exploiting syntactic forms for the automatic identification of idiomatic expressions in context", |
| "authors": [ |
| { |
| "first": "Paul", |
| "middle": [], |
| "last": "Cook", |
| "suffix": "" |
| }, |
| { |
| "first": "Afsaneh", |
| "middle": [], |
| "last": "Fazly", |
| "suffix": "" |
| }, |
| { |
| "first": "Suzanne", |
| "middle": [], |
| "last": "Stevenson", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of the Workshop on A Broader Perspective on Multiword Expressions", |
| "volume": "", |
| "issue": "", |
| "pages": "41--48", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Paul Cook, Afsaneh Fazly, and Suzanne Stevenson. 2007. Pulling their weight: Exploiting syntactic forms for the automatic identification of idiomatic expressions in context. In Proceedings of the Work- shop on A Broader Perspective on Multiword Expres- sions, pages 41-48, Prague, Czech Republic. Asso- ciation for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "The vnc-tokens dataset", |
| "authors": [ |
| { |
| "first": "Paul", |
| "middle": [], |
| "last": "Cook", |
| "suffix": "" |
| }, |
| { |
| "first": "Afsaneh", |
| "middle": [], |
| "last": "Fazly", |
| "suffix": "" |
| }, |
| { |
| "first": "Suzanne", |
| "middle": [], |
| "last": "Stevenson", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of the LREC Workshop Towards a Shared Task for Multiword Expressions", |
| "volume": "", |
| "issue": "", |
| "pages": "19--22", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Paul Cook, Afsaneh Fazly, and Suzanne Stevenson. 2008. The vnc-tokens dataset. In Proceedings of the LREC Workshop Towards a Shared Task for Mul- tiword Expressions (MWE 2008), pages 19-22.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Literal or idiomatic? identifying the reading of single occurrences of German multiword expressions using word embeddings", |
| "authors": [ |
| { |
| "first": "Rafael", |
| "middle": [], |
| "last": "Ehren", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the EACL Student Research Workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "103--112", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rafael Ehren. 2017. Literal or idiomatic? identifying the reading of single occurrences of German multi- word expressions using word embeddings. In Pro- ceedings of the EACL Student Research Workshop, pages 103-112, Valencia, Spain.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Unsupervised type and token identification of idiomatic expressions", |
| "authors": [ |
| { |
| "first": "Afsaneh", |
| "middle": [], |
| "last": "Fazly", |
| "suffix": "" |
| }, |
| { |
| "first": "Paul", |
| "middle": [], |
| "last": "Cook", |
| "suffix": "" |
| }, |
| { |
| "first": "Suzanne", |
| "middle": [], |
| "last": "Stevenson", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Computational Linguistics", |
| "volume": "35", |
| "issue": "1", |
| "pages": "61--103", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Afsaneh Fazly, Paul Cook, and Suzanne Stevenson. 2009. Unsupervised type and token identification of idiomatic expressions. Computational Linguistics, 35(1):61-103.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "A survey of idiomatic preposition-noun-verb triples on token level", |
| "authors": [ |
| { |
| "first": "Fabienne", |
| "middle": [], |
| "last": "Fritzinger", |
| "suffix": "" |
| }, |
| { |
| "first": "Marion", |
| "middle": [], |
| "last": "Weller", |
| "suffix": "" |
| }, |
| { |
| "first": "Ulrich", |
| "middle": [], |
| "last": "Heid", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)", |
| "volume": "", |
| "issue": "", |
| "pages": "2908--2914", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Fabienne Fritzinger, Marion Weller, and Ulrich Heid. 2010. A survey of idiomatic preposition-noun-verb triples on token level. In Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10), pages 2908-2914, Val- letta, Malta. European Language Resources Associ- ation (ELRA).", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Detecting novel metaphor using selectional preference information", |
| "authors": [ |
| { |
| "first": "Hessel", |
| "middle": [], |
| "last": "Haagsma", |
| "suffix": "" |
| }, |
| { |
| "first": "Johannes", |
| "middle": [], |
| "last": "Bjerva", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the Fourth Workshop on Metaphor in NLP", |
| "volume": "", |
| "issue": "", |
| "pages": "10--17", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hessel Haagsma and Johannes Bjerva. 2016. Detect- ing novel metaphor using selectional preference in- formation. In Proceedings of the Fourth Workshop on Metaphor in NLP, pages 10-17.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Metaphor in academic discourse: Linguistic forms, conceptual structures, communicative functions and cognitive representations", |
| "authors": [ |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Julia", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Herrmann", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Julia B. Herrmann. 2013. Metaphor in academic discourse: Linguistic forms, conceptual structures, communicative functions and cognitive representa- tions. Phd thesis, Vrije Universiteit Amsterdam, Amsterdam.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Long short-term memory", |
| "authors": [ |
| { |
| "first": "Sepp", |
| "middle": [], |
| "last": "Hochreiter", |
| "suffix": "" |
| }, |
| { |
| "first": "J\u00fcrgen", |
| "middle": [], |
| "last": "Schmidhuber", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Neural computation", |
| "volume": "9", |
| "issue": "8", |
| "pages": "1735--1780", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "A corpus of literal and idiomatic uses of German infinitive-verb compounds", |
| "authors": [ |
| { |
| "first": "Andrea", |
| "middle": [], |
| "last": "Horbach", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrea", |
| "middle": [], |
| "last": "Hensler", |
| "suffix": "" |
| }, |
| { |
| "first": "Sabine", |
| "middle": [], |
| "last": "Krome", |
| "suffix": "" |
| }, |
| { |
| "first": "Jakob", |
| "middle": [], |
| "last": "Prange", |
| "suffix": "" |
| }, |
| { |
| "first": "Werner", |
| "middle": [], |
| "last": "Scholze-Stubenrecht", |
| "suffix": "" |
| }, |
| { |
| "first": "Diana", |
| "middle": [], |
| "last": "Steffen", |
| "suffix": "" |
| }, |
| { |
| "first": "Stefan", |
| "middle": [], |
| "last": "Thater", |
| "suffix": "" |
| }, |
| { |
| "first": "Christian", |
| "middle": [], |
| "last": "Wellner", |
| "suffix": "" |
| }, |
| { |
| "first": "Manfred", |
| "middle": [], |
| "last": "Pinkal", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)", |
| "volume": "", |
| "issue": "", |
| "pages": "836--841", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Andrea Horbach, Andrea Hensler, Sabine Krome, Jakob Prange, Werner Scholze-Stubenrecht, Diana Steffen, Stefan Thater, Christian Wellner, and Man- fred Pinkal. 2016. A corpus of literal and idiomatic uses of German infinitive-verb compounds. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 836-841, Portoro\u017e, Slovenia. European Lan- guage Resources Association (ELRA).", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Automatic identification of non-compositional multiword expressions using latent semantic analysis", |
| "authors": [ |
| { |
| "first": "Graham", |
| "middle": [], |
| "last": "Katz", |
| "suffix": "" |
| }, |
| { |
| "first": "Eugenie", |
| "middle": [], |
| "last": "Giesbrecht", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of the Workshop on Multiword Expressions: Identifying and Exploiting Underlying Properties", |
| "volume": "", |
| "issue": "", |
| "pages": "12--19", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Graham Katz and Eugenie Giesbrecht. 2006. Au- tomatic identification of non-compositional multi- word expressions using latent semantic analysis. In Proceedings of the Workshop on Multiword Expres- sions: Identifying and Exploiting Underlying Prop- erties, pages 12-19, Sydney, Australia. ACL.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "A cohesion graph based approach for unsupervised recognition of literal and non-literal use of multiword expressions", |
| "authors": [ |
| { |
| "first": "Linlin", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Caroline", |
| "middle": [], |
| "last": "Sporleder", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of the 2009 Workshop on Graph-based Methods for Natural Language Processing (TextGraphs-4)", |
| "volume": "", |
| "issue": "", |
| "pages": "75--83", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Linlin Li and Caroline Sporleder. 2009. A cohesion graph based approach for unsupervised recognition of literal and non-literal use of multiword expres- sions. In Proceedings of the 2009 Workshop on Graph-based Methods for Natural Language Pro- cessing (TextGraphs-4), pages 75-83, Suntec, Sin- gapore.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Distributed representations of words and phrases and their compositionality", |
| "authors": [ |
| { |
| "first": "Tomas", |
| "middle": [], |
| "last": "Mikolov", |
| "suffix": "" |
| }, |
| { |
| "first": "Ilya", |
| "middle": [], |
| "last": "Sutskever", |
| "suffix": "" |
| }, |
| { |
| "first": "Kai", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Greg", |
| "middle": [ |
| "S" |
| ], |
| "last": "Corrado", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeff", |
| "middle": [], |
| "last": "Dean", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Advances in neural information processing systems", |
| "volume": "", |
| "issue": "", |
| "pages": "3111--3119", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111-3119.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Deep contextualized word representations", |
| "authors": [ |
| { |
| "first": "Matthew", |
| "middle": [], |
| "last": "Peters", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Neumann", |
| "suffix": "" |
| }, |
| { |
| "first": "Mohit", |
| "middle": [], |
| "last": "Iyyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Matt", |
| "middle": [], |
| "last": "Gardner", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [], |
| "last": "Clark", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenton", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Luke", |
| "middle": [], |
| "last": "Zettlemoyer", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "1", |
| "issue": "", |
| "pages": "2227--2237", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/N18-1202" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "MIP: A method for identifying metaphorically used words in discourse", |
| "authors": [ |
| { |
| "first": "Pragglejaz", |
| "middle": [], |
| "last": "Group", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Metaphor and Symbol", |
| "volume": "22", |
| "issue": "1", |
| "pages": "1--39", |
| "other_ids": { |
| "DOI": [ |
| "10.1080/10926480709336752" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pragglejaz Group. 2007. MIP: A method for identifying metaphorically used words in discourse. Metaphor and Symbol, 22(1):1-39.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Edition 1.1 of the PARSEME shared task on automatic identification of Verbal Multiword Expressions", |
| "authors": [ |
| { |
| "first": "Carlos", |
| "middle": [], |
| "last": "Ramisch", |
| "suffix": "" |
| }, |
| { |
| "first": "Silvio", |
| "middle": [ |
| "Ricardo" |
| ], |
| "last": "Cordeiro", |
| "suffix": "" |
| }, |
| { |
| "first": "Agata", |
| "middle": [], |
| "last": "Savary", |
| "suffix": "" |
| }, |
| { |
| "first": "Veronika", |
| "middle": [], |
| "last": "Vincze", |
| "suffix": "" |
| }, |
| { |
| "first": "Behrang", |
| "middle": [], |
| "last": "Qasemizadeh", |
| "suffix": "" |
| }, |
| { |
| "first": "Marie", |
| "middle": [], |
| "last": "Candito", |
| "suffix": "" |
| }, |
| { |
| "first": "Verginica", |
| "middle": [ |
| "Barbu" |
| ], |
| "last": "Mititelu", |
| "suffix": "" |
| }, |
| { |
| "first": "Voula", |
| "middle": [], |
| "last": "Giouli", |
| "suffix": "" |
| }, |
| { |
| "first": "Ivelina", |
| "middle": [], |
| "last": "Stoyanova", |
| "suffix": "" |
| }, |
| { |
| "first": "Nathan", |
| "middle": [], |
| "last": "Schneider", |
| "suffix": "" |
| }, |
| { |
| "first": "Timm", |
| "middle": [], |
| "last": "Lichte", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of LAW-MWE-CxG", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Carlos Ramisch, Silvio Ricardo Cordeiro, Agata Savary, Veronika Vincze, Behrang QasemiZadeh, Marie Candito, Verginica Barbu Mititelu, Voula Giouli, Ivelina Stoyanova, Nathan Schneider, and Timm Lichte. 2018. Edition 1.1 of the PARSEME shared task on automatic identification of Verbal Multiword Expressions. In Proceedings of LAW-MWE-CxG 2018, Santa Fe, USA.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Software Framework for Topic Modelling with Large Corpora", |
| "authors": [ |
| { |
| "first": "Radim", |
| "middle": [], |
| "last": "\u0158eh\u016f\u0159ek", |
| "suffix": "" |
| }, |
| { |
| "first": "Petr", |
| "middle": [], |
| "last": "Sojka", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks", |
| "volume": "", |
| "issue": "", |
| "pages": "45--50", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Radim \u0158eh\u016f\u0159ek and Petr Sojka. 2010. Software Framework for Topic Modelling with Large Corpora. In Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pages 45-50, Valletta, Malta. ELRA. http://is.muni.cz/publication/884893/en.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Literal occurrences of multiword expressions: Rare birds that cause a stir", |
| "authors": [ |
| { |
| "first": "Agata", |
| "middle": [], |
| "last": "Savary", |
| "suffix": "" |
| }, |
| { |
| "first": "Silvio", |
| "middle": [ |
| "Ricardo" |
| ], |
| "last": "Cordeiro", |
| "suffix": "" |
| }, |
| { |
| "first": "Timm", |
| "middle": [], |
| "last": "Lichte", |
| "suffix": "" |
| }, |
| { |
| "first": "Carlos", |
| "middle": [], |
| "last": "Ramisch", |
| "suffix": "" |
| }, |
| { |
| "first": "Uxoa", |
| "middle": [], |
| "last": "I\u00f1urrieta", |
| "suffix": "" |
| }, |
| { |
| "first": "Voula", |
| "middle": [], |
| "last": "Giouli", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "The Prague Bulletin of Mathematical Linguistics", |
| "volume": "112", |
| "issue": "1", |
| "pages": "5--54", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Agata Savary, Silvio Ricardo Cordeiro, Timm Lichte, Carlos Ramisch, Uxoa I\u00f1urrieta, and Voula Giouli. 2019. Literal occurrences of multiword expressions: Rare birds that cause a stir. The Prague Bulletin of Mathematical Linguistics, 112(1):5-54.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Building large corpora from the web using a new efficient tool chain", |
| "authors": [ |
| { |
| "first": "Roland", |
| "middle": [], |
| "last": "Sch\u00e4fer", |
| "suffix": "" |
| }, |
| { |
| "first": "Felix", |
| "middle": [], |
| "last": "Bildhauer", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC-2012)", |
| "volume": "", |
| "issue": "", |
| "pages": "486--493", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Roland Sch\u00e4fer and Felix Bildhauer. 2012. Building large corpora from the web using a new efficient tool chain. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC-2012), pages 486-493, Istanbul, Turkey. European Language Resources Association (ELRA). ACL Anthology Identifier: L12-1497.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Metaphor identification using verb and noun clustering", |
| "authors": [ |
| { |
| "first": "Ekaterina", |
| "middle": [], |
| "last": "Shutova", |
| "suffix": "" |
| }, |
| { |
| "first": "Lin", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| }, |
| { |
| "first": "Anna", |
| "middle": [], |
| "last": "Korhonen", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 23rd International Conference on Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "1002--1010", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ekaterina Shutova, Lin Sun, and Anna Korhonen. 2010. Metaphor identification using verb and noun clustering. In Proceedings of the 23rd International Conference on Computational Linguistics, pages 1002-1010. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Idioms in context: The IDIX corpus", |
| "authors": [ |
| { |
| "first": "Caroline", |
| "middle": [], |
| "last": "Sporleder", |
| "suffix": "" |
| }, |
| { |
| "first": "Linlin", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Philip", |
| "middle": [], |
| "last": "Gorinski", |
| "suffix": "" |
| }, |
| { |
| "first": "Xaver", |
| "middle": [], |
| "last": "Koch", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Caroline Sporleder, Linlin Li, Philip Gorinski, and Xaver Koch. 2010. Idioms in context: The IDIX corpus. In Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10), Valletta, Malta. European Language Resources Association (ELRA).", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "A Method for Linguistic Metaphor Identification. Number 14 in Converging Evidence in Language and Communication Research", |
| "authors": [ |
| { |
| "first": "Gerard", |
| "middle": [ |
| "J" |
| ], |
| "last": "Steen", |
| "suffix": "" |
| }, |
| { |
| "first": "Aletta", |
| "middle": [ |
| "G" |
| ], |
| "last": "Dorst", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [ |
| "Berenike" |
| ], |
| "last": "Herrmann", |
| "suffix": "" |
| }, |
| { |
| "first": "Anna", |
| "middle": [], |
| "last": "Kaal", |
| "suffix": "" |
| }, |
| { |
| "first": "Tina", |
| "middle": [], |
| "last": "Krennmayr", |
| "suffix": "" |
| }, |
| { |
| "first": "Trijntje", |
| "middle": [], |
| "last": "Pasma", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "John Benjamins", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "DOI": [ |
| "10.1075/celcr.14" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gerard J. Steen, Aletta G. Dorst, J. Berenike Herrmann, Anna Kaal, Tina Krennmayr, and Trijntje Pasma. 2010. A Method for Linguistic Metaphor Identification. Number 14 in Converging Evidence in Language and Communication Research. John Benjamins, Amsterdam, The Netherlands.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "type_str": "figure", |
| "uris": null, |
| "num": null, |
| "text": ", Sporleder et al. (2010), Fritzinger et al. (2010))." |
| }, |
| "FIGREF1": { |
| "type_str": "figure", |
| "uris": null, |
| "num": null, |
| "text": "Distribution of annotation labels in COLF-VID" |
| }, |
| "TABREF1": { |
| "num": null, |
| "html": null, |
| "content": "<table/>", |
| "text": "", |
| "type_str": "table" |
| }, |
| "TABREF3": { |
| "num": null, |
| "html": null, |
| "content": "<table/>", |
| "text": "Evaluation results (weighted macro) per VID on the test set.", |
| "type_str": "table" |
| } |
| } |
| } |
| } |