| { |
| "paper_id": "2021", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T15:31:37.690761Z" |
| }, |
| "title": "Paradigm Clustering with Weighted Edit Distance", |
| "authors": [ |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "Gerlach", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Colorado Boulder", |
| "location": {} |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Adam", |
| "middle": [], |
| "last": "Wiemerslage", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Colorado Boulder", |
| "location": {} |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Katharina", |
| "middle": [], |
| "last": "Kann", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Colorado Boulder", |
| "location": {} |
| }, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "This paper describes our system for the SIGMORPHON 2021 Shared Task on Unsupervised Morphological Paradigm Clustering, which asks participants to group inflected forms together according to their underlying lemma without the aid of annotated training data. We employ agglomerative clustering to group word forms together using a metric that combines an orthographic distance and a semantic distance derived from word embeddings. We experiment with two variations of an edit distance-based model for quantifying orthographic distance, but, due to time constraints, our systems do not outperform the baseline. However, we also show that, with more time, our results improve substantially.", |
| "pdf_parse": { |
| "paper_id": "2021", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "This paper describes our system for the SIGMORPHON 2021 Shared Task on Unsupervised Morphological Paradigm Clustering, which asks participants to group inflected forms together according to their underlying lemma without the aid of annotated training data. We employ agglomerative clustering to group word forms together using a metric that combines an orthographic distance and a semantic distance derived from word embeddings. We experiment with two variations of an edit distance-based model for quantifying orthographic distance, but, due to time constraints, our systems do not outperform the baseline. However, we also show that, with more time, our results improve substantially.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Most of the world's languages express grammatical properties, such as tense or case, via small changes to a word's surface form. This process is called morphological inflection, and the canonical form of a word is known as its lemma. A search of the WALS database of linguistic typology shows that 80% of the database's languages mark verb tense and 65% mark grammatical case through morphology (Dryer and Haspelmath, 2013) .", |
| "cite_spans": [ |
| { |
| "start": 395, |
| "end": 423, |
| "text": "(Dryer and Haspelmath, 2013)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The English lemma do, for instance, has an inflected form did that expresses past tense. Though English verbs inflect to express tense, there are generally only 4 to 5 surface variations for a given English lemma. In contrast, a Russian verb can have up to 30 morphological inflections per lemma, and other languages, such as Basque, have hundreds of forms per lemma. Regular English plural nouns, for example, are obtained from the lemma by adding -s or -es to the end of the noun, e.g., list/lists or kiss/kisses. However, irregular plurals also exist, such as ox/oxen or mouse/mice. Although irregular forms are less frequent, they cause challenges for the automatic generation or analysis of the surface forms of English plural nouns. In this work, we address the SIGMORPHON 2021 Shared Task on Unsupervised Morphological Paradigm Clustering (\"Task 2\") (Wiemerslage et al., 2021). The goal of this shared task is to group words encountered in naturally occurring text into morphological paradigms. Unsupervised paradigm clustering can be helpful for state-of-the-art natural language processing (NLP) systems, which typically require large amounts of training data. The ability to group words together into paradigms is a useful first step for training a system to induce full paradigms from a limited number of examples, a task known as (supervised) morphological paradigm completion. Building paradigms can help an NLP system to induce representations for rare words or to generate words that have not been observed in a given corpus. Lastly, unsupervised systems have the advantage of not needing annotated data, which can be costly in terms of time and money, or, in the case of extinct or endangered languages, entirely impossible.", |
| "cite_spans": [ |
| { |
| "start": 813, |
| "end": 839, |
| "text": "(Wiemerslage et al., 2021)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Since 2016, the Association for Computational Linguistics' Special Interest Group on Computational Morphology and Phonology (SIGMORPHON) has created shared tasks to help spur the development of state-of-the-art systems to explicitly handle morphological processes in a language. These tasks have involved morphological inflection (Cotterell et al., 2016), lemmatization (McCarthy et al., 2019), as well as other, related tasks. SIGMORPHON has increased the level of difficulty of the shared tasks, largely along two dimensions. The first dimension is the amount of data available for models to learn from, reflecting the difficulties of analyzing low-resource languages. The second dimension is the amount of structure provided in the input data. Initially, SIGMORPHON shared tasks provided predefined tables of lemmas, morphological tags, and inflected forms. For the SIGMORPHON 2021 Shared Task on Unsupervised Morphological Paradigm Clustering, only raw text is provided as input.", |
| "cite_spans": [ |
| { |
| "start": 330, |
| "end": 353, |
| "text": "(Cotterell et al., 2016", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 354, |
| "end": 393, |
| "text": "), lemmatization (McCarthy et al., 2019", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We propose a system that combines orthographic and semantic similarity measures to cluster surface forms found in raw text. We experiment with a character-level language model for weighting substring differences between words. Due to time constraints, we are only able to cluster a subset of each language's vocabulary. Despite this, our system's performance is comparable to the baseline.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Unsupervised morphology has attracted a great deal of interest historically, including a large body of work focused on segmentation (Xu et al., 2018; Creutz and Lagus, 2007; Poon et al., 2009; Narasimhan et al., 2015). Recently, the task of unsupervised morphological paradigm completion has been proposed (Jin et al., 2020; Erdmann et al., 2020), wherein the goal is to induce full paradigms from raw text corpora.", |
| "cite_spans": [ |
| { |
| "start": 132, |
| "end": 149, |
| "text": "(Xu et al., 2018;", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 150, |
| "end": 173, |
| "text": "Creutz and Lagus, 2007;", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 174, |
| "end": 192, |
| "text": "Poon et al., 2009;", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 193, |
| "end": 217, |
| "text": "Narasimhan et al., 2015)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 308, |
| "end": 325, |
| "text": "Jin et al., 2020;", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 326, |
| "end": 347, |
| "text": "Erdmann et al., 2020)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "In this year's SIGMORPHON shared task, we are asked to only address part of the unsupervised paradigm completion task: paradigm clustering. Intuitively, the task of segmentation is related to paradigm clustering, but the outputs are different. Goldsmith (2001) produces morphological signatures, which are similar to approximate paradigms, based on an algorithm that uses minimum description length. However, this type of algorithm relies heavily on purely orthographic features of the vocabulary. Schone and Jurafsky (2001) hypothesize that approximating semantic information can help differentiate between hypothesized morphemes, revealing those that are productive. They propose an algorithm that combines orthography, semantics, and syntactic distributions to induce morphological relationships. They used semantic relatedness, quantified by latent semantic analysis, combined with the frequencies of affixes and syntactic context (Schone and Jurafsky, 2000) .", |
| "cite_spans": [ |
| { |
| "start": 244, |
| "end": 260, |
| "text": "Goldsmith (2001)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 498, |
| "end": 524, |
| "text": "Schone and Jurafsky (2001)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 935, |
| "end": 962, |
| "text": "(Schone and Jurafsky, 2000)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "More recently, Soricut and Och (2015) have used SkipGram word embeddings (Mikolov et al., 2013) to find meaningful morphemes based on analogies: regularities exhibited by embedding spaces allow for inferences of certain types (e.g., king is to man what queen is to woman). Hypothesizing that these regularities also hold for morphological relations, they represent morphemes by vector differences between semantically similar forms, e.g., the vector for the suffix -s may be represented by the difference between the vectors for cats and cat.", |
| "cite_spans": [ |
| { |
| "start": 73, |
| "end": 95, |
| "text": "(Mikolov et al., 2013)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Drawing upon these intuitions, we follow Rosa and Zabokrtsk\u00fd (2019), which combines semantic distance using fastText embeddings (Bojanowski et al., 2017) with an orthographic distance between word pairs. Words are then clustered into paradigms using agglomerative clustering.", |
| "cite_spans": [ |
| { |
| "start": 128, |
| "end": 153, |
| "text": "(Bojanowski et al., 2017)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Given a raw text corpus, the task is to sort words into clusters that correspond to paradigms. More formally, for the vocabulary \u03a3 of all types attested in the corpus and the set of morphological paradigms \u03a0 for which at least one word is in \u03a3, the goal is to output clusters corresponding to \u03c0 k \u2229 \u03a3 for all \u03c0 k \u2208 \u03a0.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Task Description", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Data As the raw text data for this task, JHU Bible corpora (McCarthy et al., 2020b) are provided by the organizers. This is the only data that systems can use. The organizers further provide development and test sets consisting of gold clusters for a subset of words in the Bible corpora. Each cluster is a list of words representing \u03c0 k \u2229 \u03a3 for \u03c0 k \u2208 \u03a0 dev or \u03c0 k \u2208 \u03a0 test , respectively, and", |
| "cite_spans": [ |
| { |
| "start": 59, |
| "end": 83, |
| "text": "(McCarthy et al., 2020b)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Task Description", |
| "sec_num": "3" |
| }, |
| { |
| "text": "\u03a0 dev , \u03a0 test \u2286 \u03a0.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Task Description", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The partial morphological paradigms in \u03a0 dev and \u03a0 test are taken from the UniMorph database (McCarthy et al., 2020a) . Development sets are only available for the development languages, while test sets are only provided for the test languages. All test sets are hidden from the participants until the conclusion of the shared task.", |
| "cite_spans": [ |
| { |
| "start": 93, |
| "end": 117, |
| "text": "(McCarthy et al., 2020a)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Task Description", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Languages The development languages featured in the shared task are Maltese, Persian, Portuguese, Russian, and Swedish. The test languages are Basque, Bulgarian, English, Finnish, German, Kannada, Navajo, Spanish, and Turkish.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Task Description", |
| "sec_num": "3" |
| }, |
| { |
| "text": "We submit two systems based on Rosa and Zabokrtsk\u00fd (2019) . The first, referred to below as JW-based clustering, follows their work very closely. The second, LM-based clustering, contains the same main components, but approximates orthographic distances with the help of a language model.", |
| "cite_spans": [ |
| { |
| "start": 31, |
| "end": 57, |
| "text": "Rosa and Zabokrtsk\u00fd (2019)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "System Descriptions", |
| "sec_num": "4" |
| }, |
| { |
| "text": "We describe the system of Rosa and Zabokrtsk\u00fd (2019) in more detail here. This system clusters over words whose distance is computed as a combination of orthographic and semantic distances.", |
| "cite_spans": [ |
| { |
| "start": 26, |
| "end": 52, |
| "text": "Rosa and Zabokrtsk\u00fd (2019)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "JW-based Clustering", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Orthographic Distance The orthographic distance of two words is computed as their Jaro-Winkler (JW) edit distance (Winkler, 1990) . JW distance differs from the more common Levenshtein distance (Levenshtein, 1966) in that JW distance gives more importance to the beginnings of strings than to their ends, which is where characters belonging to the stem are likely to be in suffixing languages.", |
| "cite_spans": [ |
| { |
| "start": 114, |
| "end": 129, |
| "text": "(Winkler, 1990)", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 194, |
| "end": 213, |
| "text": "(Levenshtein, 1966)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "JW-based Clustering", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "The JW distance is averaged with the JW distance of a simplified variant of the string. The simplified variant is a string that has been lowercased, transliterated to ASCII, and stripped of its non-initial vowels. This is done to soften the impact of characters that are likely to correspond to affixes. Crucially, we believe that this biases the system towards languages that express inflection via suffixation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "JW-based Clustering", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "We represent words in the corpus by fastText embeddings, similar to Erdmann and Habash (2018), who cluster fastText embeddings for the same task in various Arabic dialects. We expect fastText embeddings to provide better representations than, e.g., Word2Vec (Mikolov et al., 2013), due to the limited size of the Bible corpora. Unfortunately, using fastText may also inadvertently result in higher similarity between words belonging to different lemmas that contain overlapping subwords corresponding to affixes.", |
| "cite_spans": [ |
| { |
| "start": 259, |
| "end": 281, |
| "text": "(Mikolov et al., 2013)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Semantic Distance", |
| "sec_num": null |
| }, |
| { |
| "text": "Overall Distance We compute a pairwise distance matrix for all words in the corpus. The distance between two words w 1 and w 2 is computed as:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Semantic Distance", |
| "sec_num": null |
| }, |
| { |
| "text": "d(w 1 , w 2 ) = 1 \u2212 \u03b4(w 1 , w 2 ) \u2022 (cos(\u0175 1 , \u0175 2 ) + 1) / 2, (1)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Semantic Distance", |
| "sec_num": null |
| }, |
| { |
| "text": "where \u0175 1 and \u0175 2 are the embeddings of w 1 and w 2 , cos is the cosine similarity, and \u03b4 is the JW edit distance. The cosine term is mapped to [0, 1] to avoid negative distances.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Semantic Distance", |
| "sec_num": null |
| }, |
| { |
| "text": "Finally, agglomerative clustering is performed by first assigning each word form to a unique cluster. At each step, the two clusters with the lowest average distance are merged together. The merging continues while the distance between clusters stays below a threshold. We tune this hyperparameter on the development set, and our final threshold is 0.3.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Semantic Distance", |
| "sec_num": null |
| }, |
| { |
| "text": "The JW-based clustering described above relies on heuristics to obtain a good measure of orthographic similarity. These heuristics help to quantify orthographic similarity between two words by relying more on the shared characters in the stem than in the affix: The plural past participles gravados and louvados in Portuguese have longer substrings in common than the substrings by which they differ. This is due to the affix -ados, which indicates that the two words express the same inflectional information, even though their lemmas are different. Similarly, the Portuguese verbs abafa and abaf\u00e1vamos differ in many characters, though they belong to the same paradigm, as can be observed by the shared stem abaf.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "LM-based Clustering", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "However, not all languages express inflection exclusively via suffixation, nor via concatenation. We thus experiment with removing the edit distance heuristics and, instead, utilizing probabilities from a character-level language model (LM) to distinguish between stems and affixes. In doing so, we hope to achieve better results for templatic languages, such as Maltese. We hypothesize that the LM will have a higher confidence for characters that are part of an affix than for those that are part of the stem. We then draw upon this hypothesis and weight edit operations between two strings based on these confidences.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "LM-based Clustering", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "LM-weighted Edit Distance Similar to the intuition behind Silfverberg and Hulden (2018), we train a character-level LM on the entire vocabulary for each Bible corpus. Unlike their work, we do not have inflectional tags for each word. Despite this, we hypothesize that the highly regular and frequent nature of inflectional affixes will lead to higher likelihoods for characters that occur in affixes than for those in stems. We train a two-layer LSTM (Hochreiter and Schmidhuber, 1997) with an embedding size of 128 and a hidden layer size of 128. We train the model until the training loss stops decreasing, for up to 100 epochs, using Adam (Kingma and Ba, 2014) with a learning rate of 0.001 and a batch size of 16.", |
| "cite_spans": [ |
| { |
| "start": 451, |
| "end": 485, |
| "text": "(Hochreiter and Schmidhuber, 1997)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "LM-based Clustering", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "When calculating the edit distance between two words, the insertion, deletion, or substitution costs are computed as a function of the LM probabilities. We expect this to give more weight to differences in the stem than to those in other parts of the word. Each character is then associated with a cost given by", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "LM-based Clustering", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "cost(w_i) = 1 - \\frac{p(w_i)}{\\sum_{j \\in |w|} p(w_j)},", |
| "eq_num": "(2)" |
| } |
| ], |
| "section": "LM-based Clustering", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "where p(w i ) is the probability of the ith character in word w as given by the LM. We then compute the cost of an insertion or deletion as the cost of the character being inserted or deleted. The cost of a substitution is the average of the costs of the two involved characters. The sum over these operations is the weighted edit distance between two words, (w 1 , w 2 ). Finally, we compute pairwise distances using Equation 1, replacing \u03b4(w 1 , w 2 ) with", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "LM-based Clustering", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "(w 1 , w 2 ) / max(|w 1 |, |w 2 |).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "LM-based Clustering", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Forward vs. Backward LM We hypothesize that the direction in which the LM is trained affects the probabilities for affixes. Intuitively, an LM is likely to assign higher confidence to characters at the beginning of a word than at the end. Thus, an LM trained on data in the forward direction (LM-F) should be more likely to assign higher probabilities to characters at the beginning of a word, such as prefixes, while a model trained on reversed words (LM-B) should assign higher probabilities to suffixes. In practice, LM-B outperforms LM-F on all development languages, cf. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "LM-based Clustering", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "The official scores obtained by our systems as well as the baseline are shown in Table 3 . Both of our systems perform minimally worse than the baseline if we consider F1 averaged over languages (0.334 vs. 0.328 and 0.327). However, we believe this to be largely due to our submissions only generating clusters for a subset of the full vocabularies: due to time constraints, we only consider words that appear at least 5 times in the corpus. No other words are included in the predicted clusters. The large gap between precision and recall reflects this constraint: our submissions have a high average precision (0.646 for both systems), indicating that the limited set of words we consider are being clustered more accurately than the F1 scores would suggest. The low recall scores (0.225 and 0.223) are likely at least partially caused by the missing words in our predictions. 2 Conversely, the baseline system has a high recall (0.629) and a low precision (0.233). This is likely due to it simply clustering words with shared substrings, such that a given word is likely to appear in many predicted clusters.", |
| "cite_spans": [ |
| { |
| "start": 879, |
| "end": 880, |
| "text": "2", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 81, |
| "end": 88, |
| "text": "Table 3", |
| "ref_id": "TABREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results and Discussion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Interestingly, both of our submissions have the same average precision on the test set, despite varying across languages. Notably, the LM-based clustering system strongly outperforms the JW-based system on Basque with respect to precision. However, the JW-based system outperforms the LM-based one by a large margin on English. One hypothesis for the difference in results is that agglutinating inflection in Basque causes very long affixes, which our LM-based system should down-weight in its measurement of orthographic similarity. Basque is also not a strictly suffixing language, which we expect the JW-based model to be biased towards. On the other hand, English has relatively little inflectional morphology and is strictly suffixing (in terms of inflection). The assumptions behind the JW-based system are thus better suited to a language like English. The JW-based system also performs best on Maltese, which suggests that its heuristics, compared to the LM-based system's, are sufficient even for a templatic language.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results and Discussion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "We present two systems for the SIGMORPHON 2021 Shared Task on Unsupervised Morphological Paradigm Clustering. Both of our systems perform slightly worse than the official baseline. However, we also show that this is because, due to time constraints, our official submissions only make predictions for a subset of the corpus's vocabulary, and that at least one of our systems improves substantially once this restriction is removed.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "This might be caused by none of the development languages being prefixing. However, in order to make a more informed choice, a method to automatically distinguish between prefixing and suffixing languages from raw text alone would be necessary.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "We confirm this hypothesis with additional experiments after the shared task's completion. Those results can be found in the appendix.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "Here we present new results which include the entire data set for selected languages. We see an improvement in F1 for each language. This is due to the increased recall scores, as the predicted paradigms are more complete. Precision scores decrease across the board. This may be due to the languages being sensitive to the threshold value. Table 4: Post-shared task results using the full data set for selected languages. These results use LM-B with a threshold value of 0.3.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 330, |
| "end": 337, |
| "text": "Table 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Appendix", |
| "sec_num": "7" |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Enriching word vectors with subword information", |
| "authors": [ |
| { |
| "first": "Piotr", |
| "middle": [], |
| "last": "Bojanowski", |
| "suffix": "" |
| }, |
| { |
| "first": "Edouard", |
| "middle": [], |
| "last": "Grave", |
| "suffix": "" |
| }, |
| { |
| "first": "Armand", |
| "middle": [], |
| "last": "Joulin", |
| "suffix": "" |
| }, |
| { |
| "first": "Tomas", |
| "middle": [], |
| "last": "Mikolov", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Transactions of the Association for Computational Linguistics", |
| "volume": "5", |
| "issue": "", |
| "pages": "135--146", |
| "other_ids": { |
| "DOI": [ |
| "10.1162/tacl_a_00051" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Associa- tion for Computational Linguistics, 5:135-146.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "The SIGMORPHON 2016 shared taskmorphological reinflection", |
| "authors": [ |
| { |
| "first": "Ryan", |
| "middle": [], |
| "last": "Cotterell", |
| "suffix": "" |
| }, |
| { |
| "first": "Christo", |
| "middle": [], |
| "last": "Kirov", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Sylak-Glassman", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Yarowsky", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 2016 Meeting of SIGMORPHON", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ryan Cotterell, Christo Kirov, John Sylak-Glassman, David Yarowsky, Jason Eisner, and Mans Hulden. 2016. The SIGMORPHON 2016 shared task- morphological reinflection. In Proceedings of the 2016 Meeting of SIGMORPHON, Berlin, Germany. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Unsupervised models for morpheme segmentation and morphology learning", |
| "authors": [ |
| { |
| "first": "Mathias", |
| "middle": [], |
| "last": "Creutz", |
| "suffix": "" |
| }, |
| { |
| "first": "Krista", |
| "middle": [], |
| "last": "Lagus", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "", |
| "volume": "4", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "DOI": [ |
| "10.1145/1187415.1187418" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mathias Creutz and Krista Lagus. 2007. Unsupervised models for morpheme segmentation and morphol- ogy learning. 4(1).", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "WALS Online. Max Planck Institute for Evolutionary Anthropology", |
| "authors": [ |
| { |
| "first": "Matthew", |
| "middle": [ |
| "S" |
| ], |
| "last": "Dryer", |
| "suffix": "" |
| }, |
| { |
| "first": "Martin", |
| "middle": [], |
| "last": "Haspelmath", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Matthew S. Dryer and Martin Haspelmath, editors. 2013. WALS Online. Max Planck Institute for Evo- lutionary Anthropology, Leipzig.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "The paradigm discovery problem", |
| "authors": [ |
| { |
| "first": "Alexander", |
| "middle": [], |
| "last": "Erdmann", |
| "suffix": "" |
| }, |
| { |
| "first": "Micha", |
| "middle": [], |
| "last": "Elsner", |
| "suffix": "" |
| }, |
| { |
| "first": "Shijie", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "Ryan", |
| "middle": [], |
| "last": "Cotterell", |
| "suffix": "" |
| }, |
| { |
| "first": "Nizar", |
| "middle": [], |
| "last": "Habash", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "7778--7790", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/2020.acl-main.695" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alexander Erdmann, Micha Elsner, Shijie Wu, Ryan Cotterell, and Nizar Habash. 2020. The paradigm discovery problem. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics, pages 7778-7790, Online. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Complementary strategies for low resourced morphological modeling", |
| "authors": [ |
| { |
| "first": "Alexander", |
| "middle": [], |
| "last": "Erdmann", |
| "suffix": "" |
| }, |
| { |
| "first": "Nizar", |
| "middle": [], |
| "last": "Habash", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the Fifteenth Workshop on Computational Research in Phonetics, Phonology, and Morphology", |
| "volume": "", |
| "issue": "", |
| "pages": "54--65", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/W18-5806" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alexander Erdmann and Nizar Habash. 2018. Complementary strategies for low resourced morphological modeling. In Proceedings of the Fifteenth Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 54-65, Brussels, Belgium. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Unsupervised learning of the morphology of a natural language", |
| "authors": [ |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Goldsmith", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Computational linguistics", |
| "volume": "27", |
| "issue": "2", |
| "pages": "153--198", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "John Goldsmith. 2001. Unsupervised learning of the morphology of a natural language. Computational linguistics, 27(2):153-198.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Long short-term memory", |
| "authors": [ |
| { |
| "first": "Sepp", |
| "middle": [], |
| "last": "Hochreiter", |
| "suffix": "" |
| }, |
| { |
| "first": "J\u00fcrgen", |
| "middle": [], |
| "last": "Schmidhuber", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Neural computation", |
| "volume": "9", |
| "issue": "8", |
| "pages": "1735--1780", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Unsupervised morphological paradigm completion", |
| "authors": [ |
| { |
| "first": "Huiming", |
| "middle": [], |
| "last": "Jin", |
| "suffix": "" |
| }, |
| { |
| "first": "Liwei", |
| "middle": [], |
| "last": "Cai", |
| "suffix": "" |
| }, |
| { |
| "first": "Yihui", |
| "middle": [], |
| "last": "Peng", |
| "suffix": "" |
| }, |
| { |
| "first": "Chen", |
| "middle": [], |
| "last": "Xia", |
| "suffix": "" |
| }, |
| { |
| "first": "Arya", |
| "middle": [], |
| "last": "Mccarthy", |
| "suffix": "" |
| }, |
| { |
| "first": "Katharina", |
| "middle": [], |
| "last": "Kann", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "6696--6707", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/2020.acl-main.598" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Huiming Jin, Liwei Cai, Yihui Peng, Chen Xia, Arya McCarthy, and Katharina Kann. 2020. Unsupervised morphological paradigm completion. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6696-6707, Online. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "The SIGMORPHON 2020 shared task on unsupervised morphological paradigm completion", |
| "authors": [ |
| { |
| "first": "Katharina", |
| "middle": [], |
| "last": "Kann", |
| "suffix": "" |
| }, |
| { |
| "first": "Arya", |
| "middle": [ |
| "D" |
| ], |
| "last": "Mccarthy", |
| "suffix": "" |
| }, |
| { |
| "first": "Garrett", |
| "middle": [], |
| "last": "Nicolai", |
| "suffix": "" |
| }, |
| { |
| "first": "Mans", |
| "middle": [], |
| "last": "Hulden", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of the 17th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology", |
| "volume": "", |
| "issue": "", |
| "pages": "51--62", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/2020.sigmorphon-1.3" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Katharina Kann, Arya D. McCarthy, Garrett Nicolai, and Mans Hulden. 2020. The SIGMORPHON 2020 shared task on unsupervised morphological paradigm completion. In Proceedings of the 17th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 51-62, Online. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Adam: A method for stochastic optimization", |
| "authors": [ |
| { |
| "first": "Diederik", |
| "middle": [ |
| "P" |
| ], |
| "last": "Kingma", |
| "suffix": "" |
| }, |
| { |
| "first": "Jimmy", |
| "middle": [], |
| "last": "Ba", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1412.6980" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Binary codes capable of correcting deletions, insertions and reversals", |
| "authors": [ |
| { |
| "first": "Vladimir", |
| "middle": [ |
| "Iosifovich" |
| ], |
| "last": "Levenshtein", |
| "suffix": "" |
| } |
| ], |
| "year": 1966, |
| "venue": "Soviet Physics Doklady", |
| "volume": "10", |
| "issue": "8", |
| "pages": "707--710", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Vladimir Iosifovich Levenshtein. 1966. Binary codes capable of correcting deletions, insertions and reversals. Soviet Physics Doklady, 10(8):707-710.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "UniMorph 3.0: Universal Morphology", |
| "authors": [ |
| { |
| "first": "Arya", |
| "middle": [ |
| "D" |
| ], |
| "last": "McCarthy", |
| "suffix": "" |
| }, |
| { |
| "first": "Christo", |
| "middle": [], |
| "last": "Kirov", |
| "suffix": "" |
| }, |
| { |
| "first": "Matteo", |
| "middle": [], |
| "last": "Grella", |
| "suffix": "" |
| }, |
| { |
| "first": "Amrit", |
| "middle": [], |
| "last": "Nidhi", |
| "suffix": "" |
| }, |
| { |
| "first": "Patrick", |
| "middle": [], |
| "last": "Xia", |
| "suffix": "" |
| }, |
| { |
| "first": "Kyle", |
| "middle": [], |
| "last": "Gorman", |
| "suffix": "" |
| }, |
| { |
| "first": "Ekaterina", |
| "middle": [], |
| "last": "Vylomova", |
| "suffix": "" |
| }, |
| { |
| "first": "Sabrina", |
| "middle": [ |
| "J" |
| ], |
| "last": "Mielke", |
| "suffix": "" |
| }, |
| { |
| "first": "Garrett", |
| "middle": [], |
| "last": "Nicolai", |
| "suffix": "" |
| }, |
| { |
| "first": "Miikka", |
| "middle": [], |
| "last": "Silfverberg", |
| "suffix": "" |
| }, |
| { |
| "first": "Timofey", |
| "middle": [], |
| "last": "Arkhangelskiy", |
| "suffix": "" |
| }, |
| { |
| "first": "Nataly", |
| "middle": [], |
| "last": "Krizhanovsky", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "Krizhanovsky", |
| "suffix": "" |
| }, |
| { |
| "first": "Elena", |
| "middle": [], |
| "last": "Klyachko", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexey", |
| "middle": [], |
| "last": "Sorokin", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Mansfield", |
| "suffix": "" |
| }, |
| { |
| "first": "Valts", |
| "middle": [], |
| "last": "Ern\u0161treits", |
| "suffix": "" |
| }, |
| { |
| "first": "Yuval", |
| "middle": [], |
| "last": "Pinter", |
| "suffix": "" |
| }, |
| { |
| "first": "Cassandra", |
| "middle": [ |
| "L" |
| ], |
| "last": "Jacobs", |
| "suffix": "" |
| }, |
| { |
| "first": "Ryan", |
| "middle": [], |
| "last": "Cotterell", |
| "suffix": "" |
| }, |
| { |
| "first": "Mans", |
| "middle": [], |
| "last": "Hulden", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Yarowsky", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of the 12th Language Resources and Evaluation Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "3922--3931", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Arya D. McCarthy, Christo Kirov, Matteo Grella, Amrit Nidhi, Patrick Xia, Kyle Gorman, Ekaterina Vylomova, Sabrina J. Mielke, Garrett Nicolai, Miikka Silfverberg, Timofey Arkhangelskiy, Nataly Krizhanovsky, Andrew Krizhanovsky, Elena Klyachko, Alexey Sorokin, John Mansfield, Valts Ern\u0161treits, Yuval Pinter, Cassandra L. Jacobs, Ryan Cotterell, Mans Hulden, and David Yarowsky. 2020a. UniMorph 3.0: Universal Morphology. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 3922-3931, Marseille, France. European Language Resources Association.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "The SIGMORPHON 2019 shared task: Morphological analysis in context and cross-lingual transfer for inflection", |
| "authors": [ |
| { |
| "first": "Arya", |
| "middle": [ |
| "D" |
| ], |
| "last": "McCarthy", |
| "suffix": "" |
| }, |
| { |
| "first": "Ekaterina", |
| "middle": [], |
| "last": "Vylomova", |
| "suffix": "" |
| }, |
| { |
| "first": "Shijie", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "Chaitanya", |
| "middle": [], |
| "last": "Malaviya", |
| "suffix": "" |
| }, |
| { |
| "first": "Lawrence", |
| "middle": [], |
| "last": "Wolf-Sonkin", |
| "suffix": "" |
| }, |
| { |
| "first": "Garrett", |
| "middle": [], |
| "last": "Nicolai", |
| "suffix": "" |
| }, |
| { |
| "first": "Christo", |
| "middle": [], |
| "last": "Kirov", |
| "suffix": "" |
| }, |
| { |
| "first": "Miikka", |
| "middle": [], |
| "last": "Silfverberg", |
| "suffix": "" |
| }, |
| { |
| "first": "Sabrina", |
| "middle": [ |
| "J" |
| ], |
| "last": "Mielke", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeffrey", |
| "middle": [], |
| "last": "Heinz", |
| "suffix": "" |
| }, |
| { |
| "first": "Ryan", |
| "middle": [], |
| "last": "Cotterell", |
| "suffix": "" |
| }, |
| { |
| "first": "Mans", |
| "middle": [], |
| "last": "Hulden", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 16th Workshop on Computational Research in Phonetics, Phonology, and Morphology", |
| "volume": "", |
| "issue": "", |
| "pages": "229--244", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/W19-4226" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Arya D. McCarthy, Ekaterina Vylomova, Shijie Wu, Chaitanya Malaviya, Lawrence Wolf-Sonkin, Garrett Nicolai, Christo Kirov, Miikka Silfverberg, Sabrina J. Mielke, Jeffrey Heinz, Ryan Cotterell, and Mans Hulden. 2019. The SIGMORPHON 2019 shared task: Morphological analysis in context and cross-lingual transfer for inflection. In Proceedings of the 16th Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 229-244, Florence, Italy. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "The Johns Hopkins University Bible corpus: 1600+ tongues for typological exploration", |
| "authors": [ |
| { |
| "first": "Arya", |
| "middle": [ |
| "D" |
| ], |
| "last": "McCarthy", |
| "suffix": "" |
| }, |
| { |
| "first": "Rachel", |
| "middle": [], |
| "last": "Wicks", |
| "suffix": "" |
| }, |
| { |
| "first": "Dylan", |
| "middle": [], |
| "last": "Lewis", |
| "suffix": "" |
| }, |
| { |
| "first": "Aaron", |
| "middle": [], |
| "last": "Mueller", |
| "suffix": "" |
| }, |
| { |
| "first": "Winston", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "Oliver", |
| "middle": [], |
| "last": "Adams", |
| "suffix": "" |
| }, |
| { |
| "first": "Garrett", |
| "middle": [], |
| "last": "Nicolai", |
| "suffix": "" |
| }, |
| { |
| "first": "Matt", |
| "middle": [], |
| "last": "Post", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Yarowsky", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of the 12th Language Resources and Evaluation Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "2884--2892", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Arya D. McCarthy, Rachel Wicks, Dylan Lewis, Aaron Mueller, Winston Wu, Oliver Adams, Garrett Nicolai, Matt Post, and David Yarowsky. 2020b. The Johns Hopkins University Bible corpus: 1600+ tongues for typological exploration. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 2884-2892, Marseille, France. European Language Resources Association.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Efficient estimation of word representations in vector space", |
| "authors": [ |
| { |
| "first": "Tomas", |
| "middle": [], |
| "last": "Mikolov", |
| "suffix": "" |
| }, |
| { |
| "first": "Kai", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Greg", |
| "middle": [], |
| "last": "Corrado", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeffrey", |
| "middle": [], |
| "last": "Dean", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "An unsupervised method for uncovering morphological chains", |
| "authors": [ |
| { |
| "first": "Karthik", |
| "middle": [], |
| "last": "Narasimhan", |
| "suffix": "" |
| }, |
| { |
| "first": "Regina", |
| "middle": [], |
| "last": "Barzilay", |
| "suffix": "" |
| }, |
| { |
| "first": "Tommi", |
| "middle": [], |
| "last": "Jaakkola", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Transactions of the Association for Computational Linguistics", |
| "volume": "3", |
| "issue": "", |
| "pages": "157--167", |
| "other_ids": { |
| "DOI": [ |
| "10.1162/tacl_a_00130" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Karthik Narasimhan, Regina Barzilay, and Tommi Jaakkola. 2015. An unsupervised method for uncovering morphological chains. Transactions of the Association for Computational Linguistics, 3:157-167.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Unsupervised morphological segmentation with log-linear models", |
| "authors": [ |
| { |
| "first": "Hoifung", |
| "middle": [], |
| "last": "Poon", |
| "suffix": "" |
| }, |
| { |
| "first": "Colin", |
| "middle": [], |
| "last": "Cherry", |
| "suffix": "" |
| }, |
| { |
| "first": "Kristina", |
| "middle": [], |
| "last": "Toutanova", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "209--217", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hoifung Poon, Colin Cherry, and Kristina Toutanova. 2009. Unsupervised morphological segmentation with log-linear models. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 209-217, Boulder, Colorado. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Unsupervised lemmatization as embeddings-based word clustering", |
| "authors": [ |
| { |
| "first": "Rudolf", |
| "middle": [], |
| "last": "Rosa", |
| "suffix": "" |
| }, |
| { |
| "first": "Zdenek", |
| "middle": [], |
| "last": "Zabokrtsk\u00fd", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rudolf Rosa and Zdenek Zabokrtsk\u00fd. 2019. Unsupervised lemmatization as embeddings-based word clustering. CoRR, abs/1908.08528.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Knowledge-free induction of morphology using latent semantic analysis", |
| "authors": [ |
| { |
| "first": "Patrick", |
| "middle": [], |
| "last": "Schone", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Jurafsky", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Fourth Conference on Computational Natural Language Learning and the Second Learning Language in Logic Workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Patrick Schone and Daniel Jurafsky. 2000. Knowledge-free induction of morphology using latent semantic analysis. In Fourth Conference on Computational Natural Language Learning and the Second Learning Language in Logic Workshop.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Knowledge-free induction of inflectional morphologies", |
| "authors": [ |
| { |
| "first": "Patrick", |
| "middle": [], |
| "last": "Schone", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Jurafsky", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Second Meeting of the North American Chapter", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Patrick Schone and Daniel Jurafsky. 2001. Knowledge-free induction of inflectional morphologies. In Second Meeting of the North American Chapter of the Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "An encoder-decoder approach to the paradigm cell filling problem", |
| "authors": [ |
| { |
| "first": "Miikka", |
| "middle": [], |
| "last": "Silfverberg", |
| "suffix": "" |
| }, |
| { |
| "first": "Mans", |
| "middle": [], |
| "last": "Hulden", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "2883--2889", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/D18-1315" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Miikka Silfverberg and Mans Hulden. 2018. An encoder-decoder approach to the paradigm cell filling problem. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2883-2889, Brussels, Belgium. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Unsupervised morphology induction using word embeddings", |
| "authors": [ |
| { |
| "first": "Radu", |
| "middle": [], |
| "last": "Soricut", |
| "suffix": "" |
| }, |
| { |
| "first": "Franz", |
| "middle": [], |
| "last": "Och", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "1627--1637", |
| "other_ids": { |
| "DOI": [ |
| "10.3115/v1/N15-1186" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Radu Soricut and Franz Och. 2015. Unsupervised morphology induction using word embeddings. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1627-1637, Denver, Colorado. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "The SIGMORPHON 2021 shared task on unsupervised morphological paradigm clustering", |
| "authors": [ |
| { |
| "first": "Adam", |
| "middle": [], |
| "last": "Wiemerslage", |
| "suffix": "" |
| }, |
| { |
| "first": "Arya", |
| "middle": [], |
| "last": "McCarthy", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexander", |
| "middle": [], |
| "last": "Erdmann", |
| "suffix": "" |
| }, |
| { |
| "first": "Garrett", |
| "middle": [], |
| "last": "Nicolai", |
| "suffix": "" |
| }, |
| { |
| "first": "Manex", |
| "middle": [], |
| "last": "Agirrezabal", |
| "suffix": "" |
| }, |
| { |
| "first": "Miikka", |
| "middle": [], |
| "last": "Silfverberg", |
| "suffix": "" |
| }, |
| { |
| "first": "Mans", |
| "middle": [], |
| "last": "Hulden", |
| "suffix": "" |
| }, |
| { |
| "first": "Katharina", |
| "middle": [], |
| "last": "Kann", |
| "suffix": "" |
| } |
| ], |
| "year": 2021, |
| "venue": "Proceedings of the 18th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology. Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Adam Wiemerslage, Arya McCarthy, Alexander Erdmann, Garrett Nicolai, Manex Agirrezabal, Miikka Silfverberg, Mans Hulden, and Katharina Kann. 2021. The SIGMORPHON 2021 shared task on unsupervised morphological paradigm clustering. In Proceedings of the 18th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "String comparator metrics and enhanced decision rules in the Fellegi-Sunter model of record linkage", |
| "authors": [ |
| { |
| "first": "William", |
| "middle": [ |
| "E" |
| ], |
| "last": "Winkler", |
| "suffix": "" |
| } |
| ], |
| "year": 1990, |
| "venue": "Proceedings of the Section on Survey Research", |
| "volume": "", |
| "issue": "", |
| "pages": "354--359", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "William E. Winkler. 1990. String comparator metrics and enhanced decision rules in the Fellegi-Sunter model of record linkage. In Proceedings of the Section on Survey Research, pages 354-359.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Unsupervised cross-lingual transfer of word embedding spaces", |
| "authors": [ |
| { |
| "first": "Ruochen", |
| "middle": [], |
| "last": "Xu", |
| "suffix": "" |
| }, |
| { |
| "first": "Yiming", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| }, |
| { |
| "first": "Naoki", |
| "middle": [], |
| "last": "Otani", |
| "suffix": "" |
| }, |
| { |
| "first": "Yuexin", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/D18-1268" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ruochen Xu, Yiming Yang, Naoki Otani, and Yuexin Wu. 2018. Unsupervised cross-lingual transfer of word embedding spaces. In Proceedings of the 2018", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Conference on Empirical Methods in Natural Language Processing", |
| "authors": [], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "2465--2474", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Conference on Empirical Methods in Natural Language Processing, pages 2465-2474, Brussels, Belgium. Association for Computational Linguistics.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "TABREF0": { |
| "text": "Inflected forms are systematically related to each other: in English, most noun plurals are", |
| "type_str": "table", |
| "content": "<table><tr><td>Basque Lemma: egin</td><td/><td/></tr><tr><td>begi</td><td>begiate</td><td>begidate</td></tr><tr><td>begie</td><td>begiete</td><td>begigu</td></tr><tr><td>begigute</td><td>begik</td><td>begin</td></tr><tr><td>beginate</td><td>begio</td><td>begiote</td></tr><tr><td>begit</td><td>begite</td><td>begitza</td></tr><tr><td>...</td><td>...</td><td>...</td></tr><tr><td>zenegizkigukeen</td><td colspan=\"2\">zenegizkigukete zenegizkiguketen</td></tr><tr><td>zenegizkigun</td><td>zenegizkigute</td><td>zenegizkiguten</td></tr><tr><td>zenegizkio</td><td>zenegizkioke</td><td>zenegizkiokeen</td></tr><tr><td>zenegizkiokete</td><td colspan=\"2\">zenegizkioketen zenegizkion</td></tr><tr><td>zenegizkiote</td><td>zenegizkioten</td><td>zenegizkit</td></tr></table>", |
| "html": null, |
| "num": null |
| }, |
| "TABREF1": { |
| "text": "The paradigm of the Basque verb egin consists of 674 inflected forms. In contrast, the paradigm of the English verb do only consists of 5 inflected forms: do, does, doing, did, and done.", |
| "type_str": "table", |
| "content": "<table/>", |
| "html": null, |
| "num": null |
| }, |
| "TABREF2": { |
| "text": ".348 0.291 0.465 0.229 0.307 0.411 0.202 0.272 0.489 0.241 0.323 Persian 0.265 0.348 0.300 0.321 0.307 0.314 0.494 0.197 0.282 0.579 0.231 0.330 Portuguese 0.218 0.794 0.341 0.771 0.248 0.376 0.494 0.159 0.241 0.742 0.239 0.362 Russian 0.234 0.807 0.363 0.802 0.282 0.417 0.726 0.255 0.378 0.792 0.278 0.412 Swedish 0.303 0.776 0.436 0.818 0.378 0.517 0.695 0.321 0.439 0.838 0.388 0.530 Average 0.254 0.615 0.346 0.635 0.289 0.386 0.482 0.186 0.268 0.688 0.275 0.391", |
| "type_str": "table", |
| "content": "<table><tr><td>. Be-</td></tr></table>", |
| "html": null, |
| "num": null |
| }, |
| "TABREF3": { |
| "text": "Precision, recall, and F1 for all development languages. LMC-R is the LM-clustering system with language models trained from right-to-left (reverse). LMC-F is trained from left-to-right (forward), and JWC is the JW-clustering system. The highest F1 for each language is in bold.", |
| "type_str": "table", |
| "content": "<table><tr><td>Lang</td><td/><td>Baseline</td><td/><td/><td>LMC</td><td/><td/><td>JWC</td><td/></tr><tr><td/><td>prec.</td><td>rec.</td><td>F1</td><td>prec.</td><td>rec.</td><td>F1</td><td>prec.</td><td>rec.</td><td>F1</td></tr><tr><td>English</td><td>0.388</td><td>0.767</td><td>0.515</td><td>0.565</td><td>0.245</td><td>0.3420</td><td>0.663</td><td>0.288</td><td>0.402</td></tr><tr><td>Navajo</td><td>0.230</td><td>0.598</td><td>0.333</td><td>0.686</td><td>0.112</td><td>0.1928</td><td>0.657</td><td>0.108</td><td>0.185</td></tr><tr><td>Spanish</td><td>0.266</td><td>0.722</td><td>0.388</td><td>0.664</td><td>0.183</td><td>0.2869</td><td>0.699</td><td>0.193</td><td>0.302</td></tr><tr><td>Finnish</td><td>0.179</td><td>0.767</td><td>0.290</td><td>0.694</td><td>0.227</td><td>0.342</td><td>0.674</td><td>0.220</td><td>0.332</td></tr><tr><td>Bulgarian</td><td>0.265</td><td>0.730</td><td>0.390</td><td>0.745</td><td>0.312</td><td>0.440</td><td>0.717</td><td>0.300</td><td>0.423</td></tr><tr><td>Basque</td><td>0.186</td><td>0.254</td><td>0.215</td><td>0.471</td><td>0.254</td><td>0.330</td><td>0.353</td><td>0.191</td><td>0.247</td></tr><tr><td>Kannada</td><td>0.172</td><td>0.385</td><td>0.238</td><td>0.570</td><td>0.169</td><td>0.261</td><td>0.625</td><td>0.185</td><td>0.286</td></tr><tr><td>German</td><td>0.254</td><td>0.776</td><td>0.382</td><td>0.7626</td><td>0.310</td><td>0.441</td><td>0.787</td><td>0.319</td><td>0.454</td></tr><tr><td>Turkish</td><td>0.156</td><td>0.658</td><td>0.252</td><td>0.6574</td><td>0.212</td><td>0.320</td><td>0.641</td><td>0.206</td><td>0.312</td></tr><tr><td>Average</td><td>0.233</td><td>0.629</td><td>0.334</td><td>0.646</td><td>0.225</td><td>0.328</td><td>0.646</td><td>0.223</td><td>0.327</td></tr></table>", |
| "html": null, |
| "num": null |
| }, |
| "TABREF4": { |
| "text": "Precision, recall, and F1 for all test languages. LMC is the LM-clustering system, JWC is the JWclustering system. The highest F1 for each language is in bold.", |
| "type_str": "table", |
| "content": "<table/>", |
| "html": null, |
| "num": null |
| } |
| } |
| } |
| } |