| { |
| "paper_id": "W19-0304", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T06:32:10.157295Z" |
| }, |
| "title": "A Contrastive Evaluation of Word Sense Disambiguation Systems for Finnish", |
| "authors": [ |
| { |
| "first": "Frankie", |
| "middle": [], |
| "last": "Robertson", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Jyv\u00e4skyl\u00e4", |
| "location": {} |
| }, |
| "email": "frankie.r.robertson@student.jyu.fi" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Previous work in Word Sense Disambiguation (WSD), like many tasks in natural language processing, has been predominantly focused on English. While there has been some work on other languages, including Uralic languages, up until this point no work has been published providing a contrastive evaluation of WSD for Finnish, despite the requisite lexical resources, most notably FinnWordNet, having long been in place. This work rectifies the situation. It gives results for systems representing the major approaches to WSD, including some of the systems which have performed best at the task for English. It is hoped these results can act as a baseline for future systems, including both multilingual systems and systems specifically targeting Finnish, as well as point to directions for other Uralic languages. Tiivistelm\u00e4 Aiempi saneiden alamerkitysten yksiselitteist\u00e4mist\u00e4 k\u00e4sittelev\u00e4 ty\u00f6, kuten monet muut luonnollisen kielen k\u00e4sittelyyn liittyv\u00e4t teht\u00e4v\u00e4t, on enimm\u00e4kseen keskittynyt englannin kieleen. Vaikka hieman ty\u00f6t\u00e4 on tehty my\u00f6s muilla kielill\u00e4, mukaan lukien uralilaiset kielet, vertailevaa arviointia suomen kielen saneiden alamerkitysten yksiselitteist\u00e4misest\u00e4 ei ole t\u00e4h\u00e4n menness\u00e4 julkaistu huolimatta siit\u00e4, ett\u00e4 tarvittavat leksikaaliset resurssit, erityisesti FinnWordNet, ovat jo pitk\u00e4\u00e4n olleet saatavilla. T\u00e4m\u00e4 ty\u00f6 pyrkii korjaamaan tilanteen. Se tarjoaa tuloksia merkitt\u00e4vimpi\u00e4 l\u00e4hestymistapoja saneiden alamerkitysten yksiselitteist\u00e4miseen edustavista ohjelmista, sis\u00e4lt\u00e4en joitakin parhaiten englanninkielell\u00e4 samasta teht\u00e4v\u00e4st\u00e4 suoriutuvia ohjelmia. N\u00e4iden tulosten toivotaan voivan toimia l\u00e4ht\u00f6kohtana tuleville, sek\u00e4 monikielisille ett\u00e4 erityisesti suomen kieleen kohdentuville, ohjelmille ja tarjota suuntaviivoja muihin uralilaisiin kieliin keskittyv\u00e4\u00e4n ty\u00f6h\u00f6n.", |
| "pdf_parse": { |
| "paper_id": "W19-0304", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Previous work in Word Sense Disambiguation (WSD), like many tasks in natural language processing, has been predominantly focused on English. While there has been some work on other languages, including Uralic languages, up until this point no work has been published providing a contrastive evaluation of WSD for Finnish, despite the requisite lexical resources, most notably FinnWordNet, having long been in place. This work rectifies the situation. It gives results for systems representing the major approaches to WSD, including some of the systems which have performed best at the task for English. It is hoped these results can act as a baseline for future systems, including both multilingual systems and systems specifically targeting Finnish, as well as point to directions for other Uralic languages. Tiivistelm\u00e4 Aiempi saneiden alamerkitysten yksiselitteist\u00e4mist\u00e4 k\u00e4sittelev\u00e4 ty\u00f6, kuten monet muut luonnollisen kielen k\u00e4sittelyyn liittyv\u00e4t teht\u00e4v\u00e4t, on enimm\u00e4kseen keskittynyt englannin kieleen. Vaikka hieman ty\u00f6t\u00e4 on tehty my\u00f6s muilla kielill\u00e4, mukaan lukien uralilaiset kielet, vertailevaa arviointia suomen kielen saneiden alamerkitysten yksiselitteist\u00e4misest\u00e4 ei ole t\u00e4h\u00e4n menness\u00e4 julkaistu huolimatta siit\u00e4, ett\u00e4 tarvittavat leksikaaliset resurssit, erityisesti FinnWordNet, ovat jo pitk\u00e4\u00e4n olleet saatavilla. T\u00e4m\u00e4 ty\u00f6 pyrkii korjaamaan tilanteen. Se tarjoaa tuloksia merkitt\u00e4vimpi\u00e4 l\u00e4hestymistapoja saneiden alamerkitysten yksiselitteist\u00e4miseen edustavista ohjelmista, sis\u00e4lt\u00e4en joitakin parhaiten englanninkielell\u00e4 samasta teht\u00e4v\u00e4st\u00e4 suoriutuvia ohjelmia. N\u00e4iden tulosten toivotaan voivan toimia l\u00e4ht\u00f6kohtana tuleville, sek\u00e4 monikielisille ett\u00e4 erityisesti suomen kieleen kohdentuville, ohjelmille ja tarjota suuntaviivoja muihin uralilaisiin kieliin keskittyv\u00e4\u00e4n ty\u00f6h\u00f6n.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Like many natural language understanding tasks, Word Sense Disambiguation (WSD) has been referred to as AI-complete (Mallery, 1988, p. 57) . That is to say, it is considered as hard as the central problems in artificial intelligence, such as passing the Turing test (Turing, 1950) . While in the general case this may be true, the best current systems can at least do better than the (quite tough to beat) Most Frequent Sense (MFS) baseline. Evaluations against common datasets and dictionaries, largely following procedures set out by the shared tasks under the auspices of the SensEval and SemEval workshops, have been key to creating measurable progress in WSD.", |
| "cite_spans": [ |
| { |
| "start": 116, |
| "end": 138, |
| "text": "(Mallery, 1988, p. 57)", |
| "ref_id": null |
| }, |
| { |
| "start": 266, |
| "end": 280, |
| "text": "(Turing, 1950)", |
| "ref_id": "BIBREF29" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "For English, present a recent comparison of different WSD systems across harmonised SensEval and SemEval data sets. Within the Uralic languages, Kahusk et al. (2001) created a manually sense annotated corpus of Estonian so that it could be included in SensEval-2. Two systems based on supervised learning were submitted, presented by Yarowsky et al. (2001) and Vider and Kaljurand (2001) . Both systems failed to beat the MFS baseline (Edmonds, 2002, Table 1 ). For Hungarian, Mih\u00e1ltz (2010) created a sense tagged corpus by translating sense tagged data from English into Hungarian and then performed WSD with a number of supervised systems. Precision was compared with an MFS baseline, but the comparison was only given on a per-word basis. Up until this point, however, no work providing this type of contrastive evaluation of WSD has been published for Finnish. This work rectifies the situation, giving results for systems representing the major approaches to WSD, including some of the systems which have performed best at the task for other languages.", |
| "cite_spans": [ |
| { |
| "start": 145, |
| "end": 165, |
| "text": "Kahusk et al. (2001)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 334, |
| "end": 356, |
| "text": "Yarowsky et al. (2001)", |
| "ref_id": "BIBREF31" |
| }, |
| { |
| "start": 361, |
| "end": 387, |
| "text": "Vider and Kaljurand (2001)", |
| "ref_id": "BIBREF30" |
| }, |
| { |
| "start": 435, |
| "end": 450, |
| "text": "(Edmonds, 2002,", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 451, |
| "end": 458, |
| "text": "Table 1", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The minimum resources required to conduct a WSD evaluation are a Lexical Knowledge Base (LKB) and an evaluation corpus. Supervised systems additionally require a training corpus. The current generation of NLP systems makes copious use of word embeddings as lexical resources, as do some of the systems evaluated here, and so these are also needed. Here, the FinnWordNet (FiWN) (Lind\u00e9n and Carlson, 2010) LKB is used, while both the evaluation and training corpora are based on the EuroSense corpus. The rest of this section describes these linguistic resources and their preparation in more depth.", |
| "cite_spans": [ |
| { |
| "start": 378, |
| "end": 404, |
| "text": "(Lind\u00e9n and Carlson, 2010)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data and Resources", |
| "sec_num": "2" |
| }, |
| { |
| "text": "EuroSense ) is a multilingual sense tagged corpus, obtained by running the knowledge based Babelfy (Moro et al., 2014) WSD algorithm on multilingual texts. To use this corpus in a way which is compatible with the maximum number of systems and in line with the standards of previous evaluations, it first has to be preprocessed. The preprocessing pipeline is shown in Figure 1 .", |
| "cite_spans": [ |
| { |
| "start": 99, |
| "end": 118, |
| "text": "(Moro et al., 2014)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 367, |
| "end": 375, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Obtaining a Sense Tagged Corpus", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "In the first stage, drop non-Finnish, all non-Finnish text and annotations are removed from the stream. EuroSense is tagged with synsets from the BabelNet LKB (Navigli and Ponzetto, 2012) . This knowledge base is based on the WordNets of many languages, enriched and modified according to other sources, such as Wikipedia and Wiktionary. However, here the LKB to be used is FinnWordNet. A mapping file was extracted from BabelNet using its Java API and a local copy, obtained through direct communication with its authors\u00b9. The BabelNet lookup stage applies this mapping. This stage drops annotations which do not exist in FiWN according to the mapping. A BabelNet synset can also map to multiple FiWN synsets, in which case an ambiguous annotation can be produced.", |
| "cite_spans": [ |
| { |
| "start": 159, |
| "end": 187, |
| "text": "(Navigli and Ponzetto, 2012)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Obtaining a Sense Tagged Corpus", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "\u00b9Made available at https://github.com/frankier/babelnet-lookup. The re-anchor and re-lemmatise stages clean up some problems with the grammatical analyses in EuroSense. EuroSense anchors sometimes include auxiliary words associated with certain verb conjugations, for example negative forms, e.g. \"ei mene\", or the perfect construction \"on k\u00e4ynyt\". Re-anchor removes these words from the anchor, taking care of the cases in which the whole anchor could actually refer to a lemma form in WordNet, e.g. \"olla merkityst\u00e4\". Re-lemmatise checks that the current lemma is associated with the annotated synsets in FiWN. If there are no matching synsets, we look back at the surface form and check all possible lemmas obtained from OMorFi (Pirinen, 2015)\u00b2 for matches against FiWN. At this point, any annotations which do not have exactly one lemma and one synset which exist in FiWN are dropped. In the penultimate stage, remove empty, any sentences without any annotations are removed entirely. Finally, the XML format is converted from the stand-off annotations of the EuroSense format to the inline annotations of the unified format of .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Obtaining a Sense Tagged Corpus", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "The corpus is then split into testing and training sections. The testing corpus is made up of the first 1000 sentences, resulting in 4507 tagged instances. The resulting corpus is already sentence and word segmented. Additionally, the instance to be disambiguated is passed to each system with the correct lemma and part of speech tag, meaning the evaluation only tests the disambiguation stage of a full WSD pipeline and not the candidate extraction or POS tagging stage. The corpus is further processed with FinnPOS (Silfverberg et al., 2016) \u00b3 for systems that need POS tags and/or lemmas for the words in the context.", |
| "cite_spans": [ |
| { |
| "start": 518, |
| "end": 544, |
| "text": "(Silfverberg et al., 2016)", |
| "ref_id": "BIBREF26" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "EuroSense", |
| "sec_num": null |
| }, |
| { |
| "text": "Many WSD techniques based on WordNet, including the typical implementation of the MFS baseline, assume it is possible to pick the most frequent sense of a lemma by picking the first sense. The reason this works with Princeton WordNet (PWN) (Miller et al., 1990 ) is because word senses are numbered according to the descending order of sense occurrence counts based on the part of the Brown corpus used during its creation\u2074. FinnWordNet senses on the other hand are randomly ordered.", |
| "cite_spans": [ |
| { |
| "start": 240, |
| "end": 260, |
| "text": "(Miller et al., 1990", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Enriching FinnWordNet with frequency data", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Since this data is potentially needed even by knowledge based systems, which should not have access to a training corpus, it is estimated here based on the frequency data in PWN. Unlike most PWN aligned WordNets, which are aligned at the synset level, FinnWordNet is aligned with PWN at the lemma level. An example of when this distinction takes effect is when lemmas are structurally similar. For example, in the synset \"singer, vocalist, vocalizer, vocaliser\", the Finnish lemma laulaja is mapped only to singer rather than to every lemma in the synset. When there is no clear distinction to be made, whole synsets are mapped. This reasoning fits with the existing structure of PWN: Relations between synsets encode purely semantic concerns, whereas relations between lemmas encode so-called morpho-semantic relationships, such as morphological derivation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Enriching FinnWordNet with frequency data", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Let the Finnish-English lemma mapping be denoted L. The frequency estimate for a Finnish lemma is then defined as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Enriching FinnWordNet with frequency data", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "freq(l_fin) = \u03a3_{(l_fin, l_eng) \u2208 L} freq(l_eng) / #{l_fin2 : (l_fin2, l_eng) \u2208 L}", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Enriching FinnWordNet with frequency data", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "\u00b2https://github.com/flammie/omorfi \u00b3https://github.com/mpsilfve/FinnPos \u2074This data is overlapping with, but distinct from SemCor (Miller et al., 1993) . The rationale of this approach is that it distributes the frequency of each English lemma evenly across all the Finnish lemmas it maps to.", |
| "cite_spans": [ |
| { |
| "start": 129, |
| "end": 150, |
| "text": "(Miller et al., 1993)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Enriching FinnWordNet with frequency data", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "To integrate the resulting synthetic frequency data into as many applications as possible, it is made available in the WordNet format\u2075. The WordNet format requires sense occurrence counts, meaning the frequency data must be converted to integer values. To perform this conversion, all frequencies are multiplied by the lowest common multiple of the divisors in the above formula. Some care must be taken in downstream applications since the resulting counts are no longer true counts, but rescaled probabilities. The main consequence here is that systems which use +1 smoothing are reconfigured to use +1000 smoothing. Table 1 summarises the word embeddings used here. Due to the large number of word forms a Finnish lemma can take, it is of note here whether the word embedding represents word forms or lemmas. If an embedding represents word forms, it is additionally of note whether it uses any subword or character level information during its training, which should help to combat data sparsity. Despite the use of subword information, none of these embeddings can analyse out of vocabulary word forms. Cross-lingual word embeddings embed words from multiple languages in the same space, a property utilised in Section 3.2.2.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 618, |
| "end": 625, |
| "text": "Table 1", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Enriching FinnWordNet with frequency data", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "To extend word representations to sequences of words such as sentences, taking the arithmetic mean of word embeddings (AWE) has been commonly used as a baseline. Various incremental modifications have been suggested. R\u00fcckl\u00e9 et al. (2018) \u2075Made available at https://github.com/frankier/fiwn. \u1d43 See Table 3 \u1d47 See Table 4 suggest concatenating the vectors formed by multiple power means, including the arithmetic mean. Variants CATP3 and CATP4 are used here. The former is the concatenation of the minimum, arithmetic mean, and the maximum, while the latter also contains the 3rd power mean. Arora et al. (2017) proposed Smooth Inverse Frequency (SIF), by taking a weighted average according to a/(a + p(w)), where a is a parameter and p(w) is the probability of the word. Arora et al. (2017) perform common component removal on the resulting vector. In the variant used here (referred to as pre-SIF), a is set to the suggested value of 10\u207b\u00b3 and common component removal is not performed, while p(w) is estimated based upon the word frequency data of Speer et al. (2018)\u2076.", |
| "cite_spans": [ |
| { |
| "start": 217, |
| "end": 237, |
| "text": "R\u00fcckl\u00e9 et al. (2018)", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 589, |
| "end": 608, |
| "text": "Arora et al. (2017)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 767, |
| "end": 786, |
| "text": "Arora et al. (2017)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 297, |
| "end": 304, |
| "text": "Table 3", |
| "ref_id": "TABREF3" |
| }, |
| { |
| "start": 311, |
| "end": 318, |
| "text": "Table 4", |
| "ref_id": "TABREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Word embeddings", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "This evaluation is based on the all-words variant of the WSD task. In this task, the aim is to identify and disambiguate all words in some corpus. This is contrasted with the lexical sample approach, where a fixed set of words are chosen for evaluation. There are many systems and approaches which have been proposed for performing WSD. To select techniques for this evaluation, the following criteria were used:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Systems and Results", |
| "sec_num": "3" |
| }, |
| { |
| "text": "\u2022 Prefer techniques which have been used in previous evaluations for English.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Systems and Results", |
| "sec_num": "3" |
| }, |
| { |
| "text": "\u2022 Prefer techniques with existing open source code that can be adapted.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Systems and Results", |
| "sec_num": "3" |
| }, |
| { |
| "text": "\u2022 Apart from this, include also simple schemes, especially if they represent an approach to WSD not covered otherwise.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Systems and Results", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The last criterion has led to the inclusion of multiple techniques based upon representation learning, where some representation of words or groups of words is learned in an unsupervised manner from a large corpus. To perform WSD based on these representations a relatively simple classifier, such as a nearest neighbour classifier, is then used. This approach to WSD additionally acts as a grounded extrinsic evaluation of the quality of the representations. The results of the evaluation are summarised in Table 2 , with variants of the Cross-lingual Lesk and AWE-NN systems broken down in Tables 3 and 4 . The rest of this section describes each of the systems in more detail.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 508, |
| "end": 515, |
| "text": "Table 2", |
| "ref_id": "TABREF2" |
| }, |
| { |
| "start": 592, |
| "end": 606, |
| "text": "Tables 3 and 4", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Systems and Results", |
| "sec_num": "3" |
| }, |
| { |
| "text": "We can define limits for the performance of the WSD systems. The floor is defined by the proportion of unambiguous test instances. It is the F 1 score obtained by a system which makes correct guesses for unambiguous instances and incorrect guesses for every other instance. The ceiling applies to systems based upon supervised learning, and is the proportion of test instances for which the true sense exists in the training data. It is the F 1 score obtained by a system which correctly associates every item in the test data with the true class seen in the training data, and makes an incorrect guess for every other instance.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Baseline", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "The random sense baseline picks a random sense by picking the first sense according to a version of FinnWordNet without the frequency data from Section 2.2, i.e. the original sense order in FinnWordNet is assumed to be random. This also gives us a rough estimate of the average ambiguity of the gold standard, 1/29.8% \u2248 3. The MFS baseline also picks the first sense, but uses the estimated frequencies from Section 2.2.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Baseline", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Knowledge based WSD systems use only information in the LKB. In almost all dictionary style resources, this can include the text of the definitions themselves. In WordNet style resources, this can include also the graphical structure of the LKB.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Knowledge based systems", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "UKB (Agirre et al., 2014 ) is a knowledge based system, representing the graph based approach to WSD. Since it works on the level of synsets, the main algorithm is essentially language independent, with the candidate extraction step being the main language dependent component. UKB can also make use of language specific word sense frequencies.", |
| "cite_spans": [ |
| { |
| "start": 4, |
| "end": 24, |
| "text": "(Agirre et al., 2014", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "UKB", |
| "sec_num": "3.2.1" |
| }, |
| { |
| "text": "As noted in Agirre et al. (2018) , depending on the particular configuration, it is easy to get a wide range of results using UKB. The configurations used here are based on the recommended configuration given by Agirre et al. (2018) . For all configurations, the ppr w2w algorithm is used, which runs personalised PageRank for each target word. One notable configuration difference here is that the contexts passed to UKB are fixed to a single sentence. This is the same input as is given to the other systems in this evaluation. Variations with and without access to word sense frequency information are given (freq & no freq), with the latter assumed to be similar to the configuration given in .", |
| "cite_spans": [ |
| { |
| "start": 12, |
| "end": 32, |
| "text": "Agirre et al. (2018)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 212, |
| "end": 232, |
| "text": "Agirre et al. (2018)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "UKB", |
| "sec_num": "3.2.1" |
| }, |
| { |
| "text": "By default, the lemmas and POS tags in the contexts given to UKB are from the sense tagged instances of EuroSense. Since some instances have been filtered from EuroSense so as to retain high precision, it may be that UKB is hamstrung by an insufficient context size. To increase the information in the context without extending it beyond the sentence boundary, a high recall, low precision lemma extraction procedure based on OMorFi is performed. The procedure (referred to in Table 2 as extract) adds to the context all possible lemmas from each word form, including parts of compound words, and also extracts multiwords that are in FiWN.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 474, |
| "end": 481, |
| "text": "Table 2", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "UKB", |
| "sec_num": "3.2.1" |
| }, |
| { |
| "text": "A variant of Lesk, referred to hereafter as Lesk with cross-lingual word embeddings (Cross-lingual Lesk) is included to represent the gloss based approach to WSD. The variant presented here is loosely based upon Basile et al. (2014) . The technique is a derivative of simplified Lesk (Kilgarriff and Rosenzweig, 2000) in that words are disambiguated by comparing contexts and glosses. For each candidate definition, the word vectors of each word in the definition text are aggregated to obtain a definition vector. The word vectors of the words in the context of the word being disambiguated are also aggregated to obtain a context vector. Definitions are then ranked from best to worst in descending order of cosine similarity between their definition vector and the context vector. Frequency data (freq) can be incorporated by multiplying the obtained cosine similarities by the smoothed probabilities of the synset given the lemma. Since the words in the context are Finnish, but the words in the definitions are English, cross-lingual word vectors are required. The embeddings used are fastText, Numberbatch and the concatenation of both. Other variations are made by the choice of aggregation function, choosing whether or not to only include words which occur in FiWN, and whether glosses are expanded by adding also the glosses of related synsets. The gloss expansion procedure follows Banerjee and Pedersen (2002, Chapter 6) . The results are summarised in Table 3 .", |
| "cite_spans": [ |
| { |
| "start": 212, |
| "end": 232, |
| "text": "Basile et al. (2014)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 284, |
| "end": 317, |
| "text": "(Kilgarriff and Rosenzweig, 2000)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 1393, |
| "end": 1432, |
| "text": "Banerjee and Pedersen (2002, Chapter 6)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 1465, |
| "end": 1472, |
| "text": "Table 3", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Lesk with cross-lingual word embeddings", |
| "sec_num": "3.2.2" |
| }, |
| { |
| "text": "Supervised WSD systems are based on supervised machine learning. Most typically in WSD a separate classifier is learned for each individual lemma.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Supervised systems", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "SupWSD (Papandrea et al., 2017 ) is a supervised WSD system following the traditional paradigm of combining hand engineered features with a linear classifier, in this case a support vector machine. SupWSD is largely a reimplementation of It Makes Sense (Zhong and Ng, 2010) , and as such uses the same feature templates and its results should be largely comparable. It was chosen over It Makes Sense since it can handle larger corpora.", |
| "cite_spans": [ |
| { |
| "start": 7, |
| "end": 30, |
| "text": "(Papandrea et al., 2017", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 253, |
| "end": 273, |
| "text": "(Zhong and Ng, 2010)", |
| "ref_id": "BIBREF32" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "SupWSD", |
| "sec_num": "3.3.1" |
| }, |
| { |
| "text": "All variants include the POS tag and local collocation feature templates, and the default configuration also includes the set of words in the sentence. Variants incorporating the most successful configuration of Iacobacci et al. (2016) , exponential decay averaging of word vectors with a window size of 10, are also included for each applicable word embedding from Section 2.3. For each configuration incorporating word vectors, variants without the set of words in the sentence are included, denoted e.g. Word2Vec\u208bs.", |
| "cite_spans": [ |
| { |
| "start": 211, |
| "end": 234, |
| "text": "Iacobacci et al. (2016)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "SupWSD", |
| "sec_num": "3.3.1" |
| }, |
| { |
| "text": "Nearest neighbour using word embeddings has been used previously by Melamud et al. (2016) as a baseline. This system is very similar to the one outlined in Section 3.2.2. The main difference is that word senses are now represented by all memorised training instances, each themselves represented by the aggregation of word embeddings in their contexts. When a training instance is the nearest neighbour of a test instance, based on cosine distance, its tagged sense is applied to the test instance. This moves the technique from the realm of knowledge based WSD to supervised WSD. Since both tagged instances and the untagged context to be disambiguated are in Finnish, the constraint that word embeddings must be cross-lingual is removed. The results are summarised in Table 4 .", |
| "cite_spans": [ |
| { |
| "start": 68, |
| "end": 89, |
| "text": "Melamud et al. (2016)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 770, |
| "end": 777, |
| "text": "Table 4", |
| "ref_id": "TABREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Nearest neighbour using word embeddings", |
| "sec_num": "3.3.2" |
| }, |
| { |
| "text": "This paper has presented the first comparative WSD evaluation for Finnish. In the results presented here, several systems beat the MFS baseline. Of the knowledge based systems, both UKB and some variants of cross-lingual Lesk incorporating frequency information managed to clear the baseline. All the supervised systems tested beat it by a 20% margin. For techniques incorporating aggregates of word vectors, CATP3 reliably outperformed a simple arithmetic mean across a variety of configurations. This evaluation may be limited by a number of issues. Multiple issues stem from the use of EuroSense. Due to the way it is automatically induced, it contains errors, making its use problematic, especially its use as a gold standard. First, we model these errors as occurring in an essentially random manner. In this case, a perfect WSD system would get a less than perfect score, and in fact the performance of all systems would be expected to decrease. It is worth noting that since inter-annotator agreement can be relatively low for word sense annotation, manual annotations can also be modelled as having this type of problem to some degree. Random errors in the training data would also cause the supervised systems to perform worse; however, this does not affect the overall integrity of the evaluation. However, it is likely that EuroSense in fact contains systematic errors. One type of systematic error is an error of omission: EuroSense assigns senses to a subset of all possible candidate words, filtering out those which the Babelfy algorithm cannot assign sufficient confidence to, meaning that the gold standard may be missing words which are in some sense more difficult, artificially increasing the score of systems which would also have problems with these same words. Perhaps worse are systematic errors which bias certain lemmas within certain types of contexts to certain incorrect senses. In this case, supervised systems may seem to perform better, but only because they are essentially learning to replicate the systematic errors in EuroSense rather than because they are performing WSD more accurately.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion & Conclusion", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Another factor which may cause this evaluation to present too optimistic a picture of the performance of the supervised systems is that the evaluation corpus and training corpus are from the same domain, parliamentary proceedings, which could result in an inflated score in comparison to an evaluation corpus from another domain. Finally, since the corpus is derived from EuroParl, the original language of most of the text is likely not Finnish. Particular features of translated language, sometimes referred to as translationese, may affect the applicability of the results to non-translated Finnish\u2077.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion & Conclusion", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Finally, the MFS baseline may have been handicapped. On the one hand, it may be reasonably analogous to the MFS baselines in WSD evaluations for other languages, in that it is ultimately derived from frequency data which is out of domain. On the other hand, estimating the frequencies from English frequency data is likely quite inaccurate compared to an estimate based on a reasonably sized sense-tagged Finnish corpus.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion & Conclusion", |
| "sec_num": "4" |
| }, |
| { |
| "text": "\u2077For an exploration of some features of translationese in EuroParl, see Koppel and Ordan (2011).", |
| "cite_spans": [ |
| { |
| "start": 72, |
| "end": 95, |
| "text": "Koppel and Ordan (2011)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion & Conclusion", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Further work could address the issues with the gold standard by creating a cross-domain, manually annotated corpus, ideally based on text originally written in Finnish. A training corpus could also be created manually, but this would be a much larger task; it would, however, allow a better MFS baseline to be created. A less labour-intensive way of improving the situation with the MFS baseline would be to add an extra baseline estimated from the supervised training data, applicable only to the supervised methods.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion & Conclusion", |
| "sec_num": "4" |
| }, |
| { |
| "text": "The implementations of the techniques reimplemented for this evaluation, together with the scripts and configuration files for the adapted open source systems, are publicly available under the Apache v2 license. To ease replicability further, the entire evaluation framework, including all requirements, WSD systems, and lexical resources, is made available as a Docker image\u2078.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion & Conclusion", |
| "sec_num": "4" |
| }, |
| { |
| "text": "\u2076https://github.com/LuminosoInsight/wordfreq", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "Thanks to the anonymous reviewers for their useful comments. Thanks also to my wife Miia for helping with the Finnish abstract. Finally, thanks to my supervisor Michael Cochez for his valuable advice and comments.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "The risk of sub-optimal use of open source nlp software: Ukb is inadvertently state-of-the-art in knowledgebased wsd", |
| "authors": [ |
| { |
| "first": "Eneko", |
| "middle": [], |
| "last": "Agirre", |
| "suffix": "" |
| }, |
| { |
| "first": "Oier", |
| "middle": [], |
| "last": "L\u00f3pez De Lacalle", |
| "suffix": "" |
| }, |
| { |
| "first": "Aitor", |
| "middle": [], |
| "last": "Soroa", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1805.04277" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Eneko Agirre, Oier L\u00f3pez de Lacalle, and Aitor Soroa. 2018. The risk of sub-optimal use of open source nlp software: Ukb is inadvertently state-of-the-art in knowledge-based wsd. arXiv preprint arXiv:1805.04277.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Random walks for knowledge-based word sense disambiguation", |
| "authors": [ |
| { |
| "first": "Eneko", |
| "middle": [], |
| "last": "Agirre", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Computational Linguistics", |
| "volume": "40", |
| "issue": "1", |
| "pages": "57--84", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Eneko Agirre, Oier L\u00f3pez de Lacalle, and Aitor Soroa. 2014. Random walks for knowledge-based word sense disambiguation. Computational Linguistics 40(1):57-84.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "A simple but tough-to-beat baseline for sentence embeddings", |
| "authors": [ |
| { |
| "first": "Sanjeev", |
| "middle": [], |
| "last": "Arora", |
| "suffix": "" |
| }, |
| { |
| "first": "Yingyu", |
| "middle": [], |
| "last": "Liang", |
| "suffix": "" |
| }, |
| { |
| "first": "Tengyu", |
| "middle": [], |
| "last": "Ma", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sanjeev Arora, Yingyu Liang, and Tengyu Ma. 2017. A simple but tough-to-beat baseline for sentence embeddings.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Adapting the Lesk algorithm for word sense disambiguation to WordNet", |
| "authors": [ |
| { |
| "first": "Satanjeev", |
| "middle": [], |
| "last": "Banerjee", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Pedersen", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Satanjeev Banerjee and T Pedersen. 2002. Adapting the Lesk algorithm for word sense disambiguation to WordNet. Master's thesis, University of Minnesota.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "An enhanced lesk word sense disambiguation algorithm through a distributional semantic model", |
| "authors": [ |
| { |
| "first": "Pierpaolo", |
| "middle": [], |
| "last": "Basile", |
| "suffix": "" |
| }, |
| { |
| "first": "Annalina", |
| "middle": [], |
| "last": "Caputo", |
| "suffix": "" |
| }, |
| { |
| "first": "Giovanni", |
| "middle": [], |
| "last": "Semeraro", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers", |
| "volume": "", |
| "issue": "", |
| "pages": "1591--1600", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pierpaolo Basile, Annalina Caputo, and Giovanni Semeraro. 2014. An enhanced lesk word sense disambiguation algorithm through a distributional semantic model. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers. pages 1591-1600. http://www.aclweb.org/anthology/C14-1151.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Eurosense: Automatic harvesting of multilingual sense annotations from parallel text", |
| "authors": [ |
| { |
| "first": "Claudio", |
| "middle": [ |
| "Delli" |
| ], |
| "last": "Bovi", |
| "suffix": "" |
| }, |
| { |
| "first": "Jose", |
| "middle": [], |
| "last": "Camacho-Collados", |
| "suffix": "" |
| }, |
| { |
| "first": "Alessandro", |
| "middle": [], |
| "last": "Raganato", |
| "suffix": "" |
| }, |
| { |
| "first": "Roberto", |
| "middle": [], |
| "last": "Navigli", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "2", |
| "issue": "", |
| "pages": "594--600", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Claudio Delli Bovi, Jose Camacho-Collados, Alessandro Raganato, and Roberto Navigli. 2017. Eurosense: Automatic harvesting of multilingual sense annotations from parallel text. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). volume 2, pages 594-600.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Word translation without parallel data", |
| "authors": [ |
| { |
| "first": "Alexis", |
| "middle": [], |
| "last": "Conneau", |
| "suffix": "" |
| }, |
| { |
| "first": "Guillaume", |
| "middle": [], |
| "last": "Lample", |
| "suffix": "" |
| }, |
| { |
| "first": "Marc'aurelio", |
| "middle": [], |
| "last": "Ranzato", |
| "suffix": "" |
| }, |
| { |
| "first": "Ludovic", |
| "middle": [], |
| "last": "Denoyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Herv\u00e9", |
| "middle": [], |
| "last": "J\u00e9gou", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1710.04087" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, and Herv\u00e9 J\u00e9gou. 2017. Word translation without parallel data. arXiv preprint arXiv:1710.04087 . \u2078https://github.com/frankier/finn-wsd-eval", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Senseval: The evaluation of word sense disambiguation systems", |
| "authors": [ |
| { |
| "first": "Philip", |
| "middle": [], |
| "last": "Edmonds", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "", |
| "volume": "7", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Philip Edmonds. 2002. Senseval: The evaluation of word sense disambiguation systems. volume 7. http://www2.denizyuret.com/ref/edmonds/edmonds2002-elra.pdf.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Word vectors, reuse, and replicability: Towards a community repository of large-text resources", |
| "authors": [ |
| { |
| "first": "Murhaf", |
| "middle": [], |
| "last": "Fares", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrey", |
| "middle": [], |
| "last": "Kutuzov", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephan", |
| "middle": [], |
| "last": "Oepen", |
| "suffix": "" |
| }, |
| { |
| "first": "Erik", |
| "middle": [], |
| "last": "Velldal", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 21st Nordic Conference on Computational Linguistics", |
| "volume": "131", |
| "issue": "", |
| "pages": "271--276", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Murhaf Fares, Andrey Kutuzov, Stephan Oepen, and Erik Velldal. 2017. Word vectors, reuse, and replicability: Towards a community repository of large-text resources. In Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017, Gothenburg, Sweden. Link\u00f6ping University Electronic Press, Link\u00f6pings universitet, 131, pages 271-276.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Embeddings for word sense disambiguation: An evaluation study", |
| "authors": [ |
| { |
| "first": "Ignacio", |
| "middle": [], |
| "last": "Iacobacci", |
| "suffix": "" |
| }, |
| { |
| "first": "Mohammad", |
| "middle": [], |
| "last": "Taher Pilehvar", |
| "suffix": "" |
| }, |
| { |
| "first": "Roberto", |
| "middle": [], |
| "last": "Navigli", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 54th", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ignacio Iacobacci, Mohammad Taher Pilehvar, and Roberto Navigli. 2016. Embeddings for word sense disambiguation: An evaluation study. In Proceedings of the 54th", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Annual Meeting of the Association for Computational Linguistics", |
| "authors": [], |
| "year": null, |
| "venue": "", |
| "volume": "1", |
| "issue": "", |
| "pages": "897--907", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). volume 1, pages 897-907.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Sensiting inflectionality: Estonian task for senseval-2", |
| "authors": [ |
| { |
| "first": "Neeme", |
| "middle": [], |
| "last": "Kahusk", |
| "suffix": "" |
| }, |
| { |
| "first": "Heili", |
| "middle": [], |
| "last": "Orav", |
| "suffix": "" |
| }, |
| { |
| "first": "Haldur", |
| "middle": [], |
| "last": "Oim", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Proceedings of SENSEVAL-2 Second International Workshop on Evaluating Word Sense Disambiguation Systems", |
| "volume": "", |
| "issue": "", |
| "pages": "25--28", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Neeme Kahusk, Heili Orav, and Haldur Oim. 2001. Sensiting inflectionality: Estonian task for senseval-2. In Proceedings of SENSEVAL-2 Second International Workshop on Evaluating Word Sense Disambiguation Systems. Association for Computational Linguistics, pages 25-28. http://www.aclweb.org/anthology/S01-1006.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "English senseval: Report and results", |
| "authors": [ |
| { |
| "first": "Adam", |
| "middle": [], |
| "last": "Kilgarriff", |
| "suffix": "" |
| }, |
| { |
| "first": "Joseph", |
| "middle": [], |
| "last": "Rosenzweig", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "LREC", |
| "volume": "6", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Adam Kilgarriff and Joseph Rosenzweig. 2000. English senseval: Report and results. In LREC. volume 6, page 2.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Translationese and its dialects", |
| "authors": [ |
| { |
| "first": "Moshe", |
| "middle": [], |
| "last": "Koppel", |
| "suffix": "" |
| }, |
| { |
| "first": "Noam", |
| "middle": [], |
| "last": "Ordan", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "1", |
| "issue": "", |
| "pages": "1318--1326", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Moshe Koppel and Noam Ordan. 2011. Translationese and its dialects. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1. Association for Computational Linguistics, pages 1318-1326.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Finnwordnet-finnish wordnet by translation", |
| "authors": [ |
| { |
| "first": "Krister", |
| "middle": [], |
| "last": "Lind\u00e9n", |
| "suffix": "" |
| }, |
| { |
| "first": "Lauri", |
| "middle": [], |
| "last": "Carlson", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "LexicoNordica-Nordic Journal of Lexicography", |
| "volume": "17", |
| "issue": "", |
| "pages": "119--140", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Krister Lind\u00e9n and Lauri Carlson. 2010. Finnwordnet-finnish wordnet by translation. LexicoNordica-Nordic Journal of Lexicography 17:119-140.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Thinking About Foreign Policy: Finding an Appropriate Role for Artificially Intelligent Computers", |
| "authors": [ |
| { |
| "first": "John", |
| "middle": [ |
| "C" |
| ], |
| "last": "Mallery", |
| "suffix": "" |
| } |
| ], |
| "year": 1988, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "John C. Mallery. 1988. Thinking About Foreign Policy: Finding an Appropriate Role for Artificially Intelligent Computers. Master's thesis, Massachusetts Institute of Technology.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "context2vec: Learning generic context embedding with bidirectional lstm", |
| "authors": [ |
| { |
| "first": "Oren", |
| "middle": [], |
| "last": "Melamud", |
| "suffix": "" |
| }, |
| { |
| "first": "Jacob", |
| "middle": [], |
| "last": "Goldberger", |
| "suffix": "" |
| }, |
| { |
| "first": "Ido", |
| "middle": [], |
| "last": "Dagan", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "51--61", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Oren Melamud, Jacob Goldberger, and Ido Dagan. 2016. context2vec: Learning generic context embedding with bidirectional lstm. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning. pages 51-61.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Semantic resources and their applications in Hungarian natural language processing", |
| "authors": [ |
| { |
| "first": "M\u00e1rton", |
| "middle": [], |
| "last": "Mih\u00e1ltz", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M\u00e1rton Mih\u00e1ltz. 2010. Semantic resources and their applications in Hungarian natural language processing. Ph.D. thesis, P\u00e1zm\u00e1ny P\u00e9ter Katolikus Egyetem.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Introduction to wordnet: An on-line lexical database", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "George", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Miller", |
| "suffix": "" |
| }, |
| { |
| "first": "Christiane", |
| "middle": [], |
| "last": "Beckwith", |
| "suffix": "" |
| }, |
| { |
| "first": "Derek", |
| "middle": [], |
| "last": "Fellbaum", |
| "suffix": "" |
| }, |
| { |
| "first": "Katherine", |
| "middle": [ |
| "J" |
| ], |
| "last": "Gross", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Miller", |
| "suffix": "" |
| } |
| ], |
| "year": 1990, |
| "venue": "International journal of lexicography", |
| "volume": "3", |
| "issue": "4", |
| "pages": "235--244", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "George A Miller, Richard Beckwith, Christiane Fellbaum, Derek Gross, and Katherine J Miller. 1990. Introduction to wordnet: An on-line lexical database. International journal of lexicography 3(4):235-244.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "A semantic concordance", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "George", |
| "suffix": "" |
| }, |
| { |
| "first": "Claudia", |
| "middle": [], |
| "last": "Miller", |
| "suffix": "" |
| }, |
| { |
| "first": "Randee", |
| "middle": [], |
| "last": "Leacock", |
| "suffix": "" |
| }, |
| { |
| "first": "Ross T", |
| "middle": [], |
| "last": "Tengi", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Bunker", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "Proceedings of the workshop on Human Language Technology", |
| "volume": "", |
| "issue": "", |
| "pages": "303--308", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "George A Miller, Claudia Leacock, Randee Tengi, and Ross T Bunker. 1993. A semantic concordance. In Proceedings of the workshop on Human Language Technology. Association for Computational Linguistics, pages 303-308.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Entity Linking meets Word Sense Disambiguation: a Unified Approach", |
| "authors": [ |
| { |
| "first": "Andrea", |
| "middle": [], |
| "last": "Moro", |
| "suffix": "" |
| }, |
| { |
| "first": "Alessandro", |
| "middle": [], |
| "last": "Raganato", |
| "suffix": "" |
| }, |
| { |
| "first": "Roberto", |
| "middle": [], |
| "last": "Navigli", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Transactions of the Association for Computational Linguistics (TACL)", |
| "volume": "2", |
| "issue": "", |
| "pages": "231--244", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Andrea Moro, Alessandro Raganato, and Roberto Navigli. 2014. Entity Linking meets Word Sense Disambiguation: a Unified Approach. Transactions of the Association for Computational Linguistics (TACL) 2:231-244. http://www.aclweb.org/anthology/Q14-1019.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Babelnet: The automatic construction, evaluation and application of a wide-coverage multilingual semantic network", |
| "authors": [ |
| { |
| "first": "Roberto", |
| "middle": [], |
| "last": "Navigli", |
| "suffix": "" |
| }, |
| { |
| "first": "Simone", |
| "middle": [ |
| "Paolo" |
| ], |
| "last": "Ponzetto", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Artificial Intelligence", |
| "volume": "193", |
| "issue": "", |
| "pages": "217--250", |
| "other_ids": { |
| "DOI": [ |
| "10.1016/j.artint.2012.07.001" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Roberto Navigli and Simone Paolo Ponzetto. 2012. Babelnet: The automatic construction, evaluation and application of a wide-coverage multilingual semantic network. Artificial Intelligence 193:217-250. https://doi.org/10.1016/j.artint.2012.07.001.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Supwsd: A flexible toolkit for supervised word sense disambiguation", |
| "authors": [ |
| { |
| "first": "Simone", |
| "middle": [], |
| "last": "Papandrea", |
| "suffix": "" |
| }, |
| { |
| "first": "Alessandro", |
| "middle": [], |
| "last": "Raganato", |
| "suffix": "" |
| }, |
| { |
| "first": "Claudio", |
| "middle": [ |
| "Delli" |
| ], |
| "last": "Bovi", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", |
| "volume": "", |
| "issue": "", |
| "pages": "103--108", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Simone Papandrea, Alessandro Raganato, and Claudio Delli Bovi. 2017. Supwsd: A flexible toolkit for supervised word sense disambiguation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing: System Demonstrations. pages 103-108.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Development and use of computational morphology of finnish in the open source and open science era: Notes on experiences with omorfi development", |
| "authors": [ |
| { |
| "first": "", |
| "middle": [], |
| "last": "Tommi A Pirinen", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "SKY Journal of Linguistics", |
| "volume": "28", |
| "issue": "", |
| "pages": "381--393", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tommi A Pirinen. 2015. Development and use of computational morphology of finnish in the open source and open science era: Notes on experiences with omorfi development. SKY Journal of Linguistics 28:381-393.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Word sense disambiguation: A unified evaluation framework and empirical comparison", |
| "authors": [ |
| { |
| "first": "Alessandro", |
| "middle": [], |
| "last": "Raganato", |
| "suffix": "" |
| }, |
| { |
| "first": "Jose", |
| "middle": [], |
| "last": "Camacho-Collados", |
| "suffix": "" |
| }, |
| { |
| "first": "Roberto", |
| "middle": [], |
| "last": "Navigli", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of EACL", |
| "volume": "", |
| "issue": "", |
| "pages": "99--110", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alessandro Raganato, Jose Camacho-Collados, and Roberto Navigli. 2017. Word sense disambiguation: A unified evaluation framework and empirical comparison. In Proceedings of EACL. pages 99-110. https://aclanthology.info/pdf/E/E17/E17-1010.pdf.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Concatenated p-mean word embeddings as universal cross-lingual sentence representations", |
| "authors": [ |
| { |
| "first": "Andreas", |
| "middle": [], |
| "last": "R\u00fcckl\u00e9", |
| "suffix": "" |
| }, |
| { |
| "first": "Steffen", |
| "middle": [], |
| "last": "Eger", |
| "suffix": "" |
| }, |
| { |
| "first": "Maxime", |
| "middle": [], |
| "last": "Peyrard", |
| "suffix": "" |
| }, |
| { |
| "first": "Iryna", |
| "middle": [], |
| "last": "Gurevych", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1803.01400" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Andreas R\u00fcckl\u00e9, Steffen Eger, Maxime Peyrard, and Iryna Gurevych. 2018. Concatenated p-mean word embeddings as universal cross-lingual sentence representations. arXiv preprint arXiv:1803.01400.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Finnpos: an open-source morphological tagging and lemmatization toolkit for finnish", |
| "authors": [ |
| { |
| "first": "Miikka", |
| "middle": [], |
| "last": "Silfverberg", |
| "suffix": "" |
| }, |
| { |
| "first": "Teemu", |
| "middle": [], |
| "last": "Ruokolainen", |
| "suffix": "" |
| }, |
| { |
| "first": "Krister", |
| "middle": [], |
| "last": "Lind\u00e9n", |
| "suffix": "" |
| }, |
| { |
| "first": "Mikko", |
| "middle": [], |
| "last": "Kurimo", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Language Resources and Evaluation", |
| "volume": "50", |
| "issue": "4", |
| "pages": "863--878", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Miikka Silfverberg, Teemu Ruokolainen, Krister Lind\u00e9n, and Mikko Kurimo. 2016. Finnpos: an open-source morphological tagging and lemmatization toolkit for finnish. Language Resources and Evaluation 50(4):863-878.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "ConceptNet 5.5: An Open Multilingual Graph of General Knowledge", |
| "authors": [ |
| { |
| "first": "Robert", |
| "middle": [], |
| "last": "Speer", |
| "suffix": "" |
| }, |
| { |
| "first": "Joshua", |
| "middle": [], |
| "last": "Chin", |
| "suffix": "" |
| }, |
| { |
| "first": "Catherine", |
| "middle": [], |
| "last": "Havasi", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Robert Speer, Joshua Chin, and Catherine Havasi. 2016. ConceptNet 5.5: An Open Multilingual Graph of General Knowledge.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "Computing machinery and intelligence", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Alan", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Turing", |
| "suffix": "" |
| } |
| ], |
| "year": 1950, |
| "venue": "Mind", |
| "volume": "59", |
| "issue": "236", |
| "pages": "433--460", |
| "other_ids": { |
| "DOI": [ |
| "10.1093/mind/LIX.236.433" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alan M Turing. 1950. Computing machinery and intelligence. Mind 59(236):433-460. https://doi.org/10.1093/mind/LIX.236.433.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "Automatic wsd: Does it make sense of estonian?", |
| "authors": [ |
| { |
| "first": "Kadri", |
| "middle": [], |
| "last": "Vider", |
| "suffix": "" |
| }, |
| { |
| "first": "Kaarel", |
| "middle": [], |
| "last": "Kaljurand", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Proceedings of SENSEVAL-2 Second International Workshop on Evaluating Word Sense Disambiguation Systems", |
| "volume": "", |
| "issue": "", |
| "pages": "159--162", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kadri Vider and Kaarel Kaljurand. 2001. Automatic wsd: Does it make sense of estonian? In Proceedings of SENSEVAL-2 Second International Workshop on Evaluating Word Sense Disambiguation Systems. Association for Computational Linguistics, pages 159-162. http://www.aclweb.org/anthology/S01-1039.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "The johns hopkins senseval2 system descriptions", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Yarowsky", |
| "suffix": "" |
| }, |
| { |
| "first": "Silviu", |
| "middle": [], |
| "last": "Cucerzan", |
| "suffix": "" |
| }, |
| { |
| "first": "Radu", |
| "middle": [], |
| "last": "Florian", |
| "suffix": "" |
| }, |
| { |
| "first": "Charles", |
| "middle": [], |
| "last": "Schafer", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Wicentowski", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "The Proceedings of the Second International Workshop on Evaluating Word Sense Disambiguation Systems", |
| "volume": "", |
| "issue": "", |
| "pages": "163--166", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David Yarowsky, Silviu Cucerzan, Radu Florian, Charles Schafer, and Richard Wicentowski. 2001. The johns hopkins senseval2 system descriptions. In The Proceedings of the Second International Workshop on Evaluating Word Sense Disambiguation Systems. Association for Computational Linguistics, pages 163-166.", |
| "links": null |
| }, |
| "BIBREF32": { |
| "ref_id": "b32", |
| "title": "It makes sense: A wide-coverage word sense disambiguation system for free text", |
| "authors": [ |
| { |
| "first": "Zhi", |
| "middle": [], |
| "last": "Zhong", |
| "suffix": "" |
| }, |
| { |
| "first": "Hwee Tou", |
| "middle": [], |
| "last": "Ng", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the ACL 2010 system demonstrations", |
| "volume": "", |
| "issue": "", |
| "pages": "78--83", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zhi Zhong and Hwee Tou Ng. 2010. It makes sense: A wide-coverage word sense disambiguation system for free text. In Proceedings of the ACL 2010 system demonstrations. Association for Computational Linguistics, pages 78-83.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "type_str": "figure", |
| "text": "A diagram showing the pipeline to convert EuroSense to the unified format used for training and evaluation data. The numbers of annotations (in millions) remaining after the various pipeline stages are given, as are the proportions of annotations dropped by individual stages. The total proportion of Finnish annotations dropped is 22%.", |
| "uris": null, |
| "num": null |
| }, |
| "TABREF1": { |
| "content": "<table><tr><td>Name</td><td>Training data</td><td colspan=\"2\">Dim Represents</td><td colspan=\"2\">Subword Cross-</td></tr><tr><td/><td/><td/><td/><td/><td>lingual</td></tr><tr><td>MUSE</td><td>Wikipedia &</td><td>300</td><td>Word forms</td><td>Yes</td><td>Yes</td></tr><tr><td>Supervised</td><td>bilingual</td><td/><td/><td/><td/></tr><tr><td>fastText\u1d43\u1d47</td><td>dictionary</td><td/><td/><td/><td/></tr><tr><td>ConceptNet</td><td>Wikipedia &</td><td>300</td><td>Lemmas &</td><td>-</td><td>Yes</td></tr><tr><td>Numberbatch</td><td>ConceptNet</td><td/><td>Multiwords</td><td/><td/></tr><tr><td>17.06\u1d9c\u1d48</td><td/><td/><td/><td/><td/></tr><tr><td>NLPL</td><td>Wikipedia &</td><td>100</td><td>Word forms</td><td>No</td><td>No</td></tr><tr><td>Word2Vec\u1d49\u1da0</td><td>CommonCrawl\u1d4d</td><td/><td/><td/><td/></tr></table>", |
| "text": "Word embeddings used", |
| "type_str": "table", |
| "num": null, |
| "html": null |
| }, |
| "TABREF2": { |
| "content": "<table><tr><td>Family</td><td>System</td><td>Variant</td><td>F 1</td></tr><tr><td>Baseline</td><td>Limits Random sense</td><td>Floor Ceiling -</td><td>13.1% 99.9% 29.8%</td></tr><tr><td/><td>MFS</td><td>-</td><td>50.4%</td></tr><tr><td/><td/><td>No freq</td><td>51.8%</td></tr><tr><td>Knowledge</td><td>UKB</td><td>No freq + Extract Freq Freq + Extract</td><td>52.2% 54.5% 54.9%</td></tr><tr><td/><td>Cross-lingual Lesk</td><td>No freq Freq</td><td>32.6% -48.2%\u1d43 48.2% -52.4%\u1d43</td></tr><tr><td/><td/><td>No embeddings</td><td>72.9%</td></tr><tr><td/><td/><td>Word2Vec\u208bs</td><td>73.6%</td></tr><tr><td colspan=\"2\">Supervised SupWSD</td><td>Word2Vec fastText\u208bs</td><td>73.1% 73.3%</td></tr><tr><td/><td/><td>fastText</td><td>73.4%</td></tr><tr><td/><td>AWE-NN</td><td>-</td><td>72.9% -75.8%\u1d47</td></tr></table>", |
| "text": "Results of experiments", |
| "type_str": "table", |
| "num": null, |
| "html": null |
| }, |
| "TABREF3": { |
| "content": "<table><tr><td colspan=\"2\">Freq Embedding</td><td>Agg</td><td colspan=\"4\">No expand No filter Filter No filter Filter Expand</td></tr><tr><td/><td/><td>AWE</td><td>37.6%</td><td>34.9%</td><td>40.1%</td><td>40.0%</td></tr><tr><td/><td>fastText</td><td>CATP3 CATP4</td><td>37.5% 37.2%</td><td>35.5% 35.2%</td><td>45.9% 44.0%</td><td>46.9% 45.2%</td></tr><tr><td/><td/><td>pre-SIF</td><td>35.3%</td><td>34.5%</td><td>41.8%</td><td>40.1%</td></tr><tr><td/><td/><td>AWE</td><td>34.3%</td><td>32.6%</td><td>33.1%</td><td>34.3%</td></tr><tr><td>No</td><td>Numberbatch</td><td>CATP3 CATP4</td><td>35.9% 35.6%</td><td>35.6% 35.4%</td><td>47.0% 45.5%</td><td>47.7% 46.2%</td></tr><tr><td/><td/><td>pre-SIF</td><td>33.3%</td><td>33.3%</td><td>35.3%</td><td>36.0%</td></tr><tr><td/><td/><td>AWE</td><td>36.7%</td><td>33.1%</td><td>37.1%</td><td>38.3%</td></tr><tr><td/><td>Concatenated</td><td>CATP3 CATP4</td><td>36.3% 36.3%</td><td>35.1% 35.3%</td><td>47.6% 45.9%</td><td>48.2% 46.6%</td></tr><tr><td/><td/><td>pre-SIF</td><td>33.8%</td><td>33.9%</td><td>40.0%</td><td>39.1%</td></tr><tr><td/><td/><td>AWE</td><td>49.4%</td><td>49.5%</td><td>50.1%</td><td>50.1%</td></tr><tr><td/><td>fastText</td><td>CATP3 CATP4</td><td>49.3% 49.3%</td><td>48.2% 48.3%</td><td>49.2% 49.5%</td><td>49.1% 49.4%</td></tr><tr><td/><td/><td>pre-SIF</td><td>52.2%</td><td>52.2%</td><td>52.4%</td><td>52.3%</td></tr><tr><td/><td/><td>AWE</td><td>49.7%</td><td>49.9%</td><td>50.5%</td><td>50.1%</td></tr><tr><td>Yes</td><td>Numberbatch</td><td>CATP3 CATP4</td><td>49.3% 49.5%</td><td>48.7% 49.1%</td><td>48.8% 49.0%</td><td>49.0% 49.2%</td></tr><tr><td/><td/><td>pre-SIF</td><td>52.0%</td><td>51.9%</td><td>51.9%</td><td>51.9%</td></tr><tr><td/><td/><td>AWE</td><td>49.4%</td><td>49.6%</td><td>50.6%</td><td>50.3%</td></tr><tr><td/><td>Concatenated</td><td>CATP3 CATP4</td><td>49.2% 49.3%</td><td>48.5% 48.9%</td><td>48.9% 49.1%</td><td>49.1% 
49.3%</td></tr><tr><td/><td/><td>pre-SIF</td><td>52.3%</td><td>52.0%</td><td>51.6%</td><td>51.7%</td></tr></table>", |
| "text": "Results for variants of Lesk with cross-lingual word embeddings", |
| "type_str": "table", |
| "num": null, |
| "html": null |
| }, |
| "TABREF4": { |
| "content": "<table><tr><td/><td colspan=\"4\">AWE CATP3 CATP4 pre-SIF</td></tr><tr><td>fastText</td><td>74.1%</td><td>74.1%</td><td>74.2%</td><td>74.1%</td></tr><tr><td colspan=\"2\">Numberbatch 74.5%</td><td>75.0%</td><td>74.9%</td><td>74.3%</td></tr><tr><td>Word2Vec</td><td>73.6%</td><td>72.9%</td><td>73.1%</td><td>73.8%</td></tr><tr><td>Concat 2\u1d43</td><td colspan=\"2\">75.1% 75.8%</td><td>75.5%</td><td>75.0%</td></tr><tr><td>Concat 3\u1d47</td><td>73.9%</td><td>73.2%</td><td>73.4%</td><td>74.5%</td></tr><tr><td colspan=\"3\">\u1d43 Concatenation of fastText and Numberbatch</td><td/><td/></tr><tr><td colspan=\"4\">\u1d47 Concatenation of fastText, Numberbatch and Word2Vec</td><td/></tr></table>", |
| "text": "Nearest neighbour using word embeddings", |
| "type_str": "table", |
| "num": null, |
| "html": null |
| } |
| } |
| } |
| } |