{
"paper_id": "W12-0215",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T04:11:24.517883Z"
},
"title": "Using context and phonetic features in models of etymological sound change",
"authors": [
{
"first": "Hannes",
"middle": [],
"last": "Wettig",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Kirill",
"middle": [],
"last": "Reshetnikov",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Helsinki",
"location": {
"country": "Finland"
}
},
"email": ""
},
{
"first": "Roman",
"middle": [],
"last": "Yangarber",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper presents a novel method for aligning etymological data, which models context-sensitive rules governing sound change, and utilizes phonetic features of the sounds. The goal is, for a given corpus of cognate sets, to find the best alignment at the sound level. We introduce an imputation procedure to compare the goodness of the resulting models, as well as the goodness of the data sets. We present evaluations to demonstrate that the new model yields improvements in performance, compared to previously reported models.",
"pdf_parse": {
"paper_id": "W12-0215",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper presents a novel method for aligning etymological data, which models context-sensitive rules governing sound change, and utilizes phonetic features of the sounds. The goal is, for a given corpus of cognate sets, to find the best alignment at the sound level. We introduce an imputation procedure to compare the goodness of the resulting models, as well as the goodness of the data sets. We present evaluations to demonstrate that the new model yields improvements in performance, compared to previously reported models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "This paper introduces a context-sensitive model for alignment and analysis of etymological data. Given a raw collection of etymological data (the corpus)-we first aim to find the \"best\" alignment at the sound or symbol level. We take the corpus (or possibly several different corpora) for a language family as given; different data sets are typically conflicting, which creates the need to determine which is more correct. Etymological data sets are found in digital etymological databases, such as ones we use for the Uralic language family. A database is typically organized into cognate sets; all elements within a cognate set are posited (by the database creators) to be derived from a common origin, which is a word-form in the ancestral proto-language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Etymology encompasses several problems, including: discovery of sets of cognates (genetically related words); determination of genetic relations among groups of languages, based on linguistic data; discovery of regular sound correspondences across languages in a given language family; and reconstruction of forms in the proto-languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Computational methods can provide valuable tools for the etymological community. The methods can be judged by how well they model certain aspects of etymology, and by whether the automatic analysis produces results that match theories established by manual analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, we allow all the data-and only the data-to determine what rules underlie it, rather than relying on external (and possibly biased) rules that try to explain the data. This approach will provide a means of measuring the quality of the etymological data sets in terms of their internal consistency-a dataset that is more consistent should receive a higher score. We seek methods that analyze the data automatically, in an unsupervised fashion, to determine whether a complete description of the correspondences can be discovered automatically, directly from raw etymological data-cognate sets within the language family. Another way to state the question is: what alignment rules are \"inherently encoded\" in the given corpus itself.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "At present, our aim is to analyze given etymological datasets, rather than to construct new ones from scratch. Because our main goal is to develop methods that are as objective as possible, the models make no a priori assumptions and follow no \"universal\" principles-e.g., no preference to align vowels with vowels, or a symbol with itself. The models are not aware of the identity of a symbol across languages, and do not try to preserve the identity of symbols, or even of features-rather they try to find maximally regular correspondences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In Section 2 we describe the data used in our experiments, and review approaches to etymological alignment over the last decade. We formalize the problem of alignment in Section 3, and give the technical details of our models in Section 4. We present results and discussion in Sections 5 and 6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We use two large Uralic etymological resources. The StarLing database of Uralic, (Starostin, 2005) , based on (R\u00e9dei, 1988; R\u00e9dei, 1991) , contains over 2500 cognate sets. Suomen Sanojen Alkuper\u00e4 (SSA), \"The Origin of Finnish Words\", a Finnish etymological dictionary, (Itkonen and Kulonen, 2000) , has over 5000 cognate sets (about half of which are only in languages from the Balto-Finnic branch, closest to Finnish). Most importantly for our models, SSA gives \"dictionary\" word-forms, which may contain extraneous morphological material, whereas StarLing data is mostly stemmed.",
"cite_spans": [
{
"start": 81,
"end": 98,
"text": "(Starostin, 2005)",
"ref_id": "BIBREF20"
},
{
"start": 110,
"end": 122,
"text": "(R\u00e9dei, 1988",
"ref_id": null
},
{
"start": 123,
"end": 135,
"text": "(R\u00e9dei, 1991",
"ref_id": null
},
{
"start": 268,
"end": 295,
"text": "(Itkonen and Kulonen, 2000)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Related Work",
"sec_num": "2"
},
{
"text": "One traditional arrangement of the Uralic languages 1 is shown in Figure 1 . We model etymological processes using these Uralic datasets.",
"cite_spans": [],
"ref_spans": [
{
"start": 66,
"end": 74,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Data and Related Work",
"sec_num": "2"
},
{
"text": "The methods in (Kondrak, 2002) learn regular one-to-one sound correspondences between pairs of related languages in the data. The methods in (Kondrak, 2003; Wettig et al., 2011) find more complex (one-to-many) correspondences. These models operate on one language pair at a time; also, they do not model the context of the sound changes, while most etymological changes are conditioned on context. The MCMC-based model proposed in (Bouchard-C\u00f4t\u00e9 et al., 2007) explicitly aims to model the context of changes, and operates on more than a pair of languages. [Footnote 1: Adapted from Encyclopedia Britannica and (Anttila, 1989) .] We should note that our models at present operate at the phonetic level only; they leave the semantic judgements of the database creators unquestioned. While other work, e.g. (Kondrak, 2004) , has attempted to approach semantics by computational means as well, our model uses the given cognate set as the fundamental unit. In our work, we do not attempt the problem of discovering cognates, addressed, e.g., in (Bouchard-C\u00f4t\u00e9 et al., 2007; Kondrak, 2004; Kessler, 2001) . We begin instead with a set of etymological data (or more than one set) for a language family as given. We focus on the principle of recurrent sound correspondence, as in much of the literature, including (Kondrak, 2002; Kondrak, 2003) , and others.",
"cite_spans": [
{
"start": 15,
"end": 30,
"text": "(Kondrak, 2002)",
"ref_id": null
},
{
"start": 141,
"end": 156,
"text": "(Kondrak, 2003;",
"ref_id": "BIBREF10"
},
{
"start": 157,
"end": 177,
"text": "Wettig et al., 2011)",
"ref_id": "BIBREF22"
},
{
"start": 431,
"end": 459,
"text": "(Bouchard-C\u00f4t\u00e9 et al., 2007)",
"ref_id": "BIBREF3"
},
{
"start": 559,
"end": 574,
"text": "(Anttila, 1989)",
"ref_id": "BIBREF0"
},
{
"start": 616,
"end": 617,
"text": "2",
"ref_id": null
},
{
"start": 789,
"end": 804,
"text": "(Kondrak, 2004)",
"ref_id": "BIBREF11"
},
{
"start": 995,
"end": 1054,
"text": "cognates, addressed, e.g., in, (Bouchard-C\u00f4t\u00e9 et al., 2007;",
"ref_id": null
},
{
"start": 1055,
"end": 1069,
"text": "Kondrak, 2004;",
"ref_id": "BIBREF11"
},
{
"start": 1070,
"end": 1084,
"text": "Kessler, 2001)",
"ref_id": "BIBREF9"
},
{
"start": 1292,
"end": 1307,
"text": "(Kondrak, 2002;",
"ref_id": null
},
{
"start": 1308,
"end": 1322,
"text": "Kondrak, 2003)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Related Work",
"sec_num": "2"
},
{
"text": "As we develop our alignment models at the sound or symbol level, in the process of evaluating these models we also come to model the relationships among groups of languages within the family. Construction of phylogenies is studied extensively, e.g., by (Nakhleh et al., 2005; Ringe et al., 2002; Barban\u00e7on et al., 2009) . This work differs from ours in that it operates on manually pre-selected sets of characters, which capture divergent features of languages within the family, whereas we operate on the raw, complete data.",
"cite_spans": [
{
"start": 262,
"end": 284,
"text": "(Nakhleh et al., 2005;",
"ref_id": "BIBREF16"
},
{
"start": 285,
"end": 304,
"text": "Ringe et al., 2002;",
"ref_id": "BIBREF17"
},
{
"start": 305,
"end": 328,
"text": "Barban\u00e7on et al., 2009)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Related Work",
"sec_num": "2"
},
{
"text": "There is extensive work on alignment in the machine-translation (MT) community, and it has been observed that methods from MT alignment may be projected onto alignment in etymology. The intuition is that translation sentences in MT correspond to cognate words in etymology, while words in MT correspond to sounds in etymology. The notion of regularity of sound change in etymology, which is what our models try to capture, is loosely similar to contextually conditioned correspondence of translation words across languages. For example, (Kondrak, 2002) employs MT alignment from (Melamed, 1997; Melamed, 2000) ; one might employ the IBM models for MT alignment, (Brown et al., 1993) , or the HMM model, (Vogel et al., 1996) . Of the MT-related models, (Bodrumlu et al., 2009) is similar to ours in that it is based on MDL (the Minimum Description Length Principle, introduced below).",
"cite_spans": [
{
"start": 537,
"end": 552,
"text": "(Kondrak, 2002)",
"ref_id": null
},
{
"start": 579,
"end": 594,
"text": "(Melamed, 1997;",
"ref_id": "BIBREF14"
},
{
"start": 595,
"end": 609,
"text": "Melamed, 2000)",
"ref_id": "BIBREF15"
},
{
"start": 648,
"end": 682,
"text": "MT alignment, (Brown et al., 1993)",
"ref_id": null
},
{
"start": 703,
"end": 723,
"text": "(Vogel et al., 1996)",
"ref_id": "BIBREF21"
},
{
"start": 752,
"end": 775,
"text": "(Bodrumlu et al., 2009)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Related Work",
"sec_num": "2"
},
{
"text": "We begin with pairwise alignment: aligning pairs of words, from two related languages in our corpus of cognates. For each word pair, the task of alignment means finding exactly which symbols correspond. Some symbols may align with \"themselves\" (i.e., with similar or identical sounds), while others may have undergone changes during the time when the two related languages have been evolving separately. The simplest form of such alignment at the symbol level is a pair (\u03c3 : \u03c4 ) \u2208 \u03a3 \u00d7 T , a single symbol \u03c3 from the source alphabet \u03a3 with a symbol \u03c4 from the target alphabet T . We denote the sizes of the alphabets by |\u03a3| and |T |.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Aligning Pairs of Words",
"sec_num": "3"
},
{
"text": "To model insertions and deletions, we augment both alphabets with a special empty symbol-denoted by a dot-and write the augmented alphabets as \u03a3 . and T . . We can then align word pairs such as vuosi-al (meaning \"year\" in Finnish and Xanty), for example as any of:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Aligning Pairs of Words",
"sec_num": "3"
},
{
"text": "v u o s i\n| | | | |\na l . . .\n\nv u o s i\n| | | | |\n. a . l .\n\netc...",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Aligning Pairs of Words",
"sec_num": "3"
},
{
"text": "The alignment on the right then consists of the symbol pairs: (v:.), (u:a), (o:.), (s:l), (i:.).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Aligning Pairs of Words",
"sec_num": "3"
},
{
"text": "The context-aware alignment method we present here is built upon baseline models published previously, (Wettig et al., 2011) , where we presented several models that do not use phonetic features or context. Similarly to the earlier ones, the current method is based on the Minimum Description Length (MDL) Principle, (Gr\u00fcnwald, 2007) .",
"cite_spans": [
{
"start": 103,
"end": 124,
"text": "(Wettig et al., 2011)",
"ref_id": "BIBREF22"
},
{
"start": 317,
"end": 333,
"text": "(Gr\u00fcnwald, 2007)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Context Model with Phonetic Features",
"sec_num": "4"
},
{
"text": "We begin with a raw set of (observed) data-the not-yet-aligned word pairs. We would like to find an alignment for the data-which we will call the complete data, complete with alignments-that makes the most sense globally, in terms of embodying regular correspondences. We are after the regularity, and the more regularity we can find, the \"better\" our alignment will be (its goodness will be defined formally later). MDL tells us that the more regularity we can find in the data, the fewer bits we will need to encode it (or compress it). More regularity means lower entropy in the distribution that describes the data, and lower entropy allows us to construct a more economical code. That is, if we have no knowledge about any regularity of correspondence between symbols, the joint distribution over all possible pairs of symbols will be very flat (high entropy). If we know that certain symbol pairs align frequently, the joint distribution will have spikes, and lower entropy. In (Wettig et al., 2011) we showed how, starting with a random alignment, a good joint distribution can be learned using MDL. However, the \"rules\" those baseline models were able to learn were very rudimentary, since they could not use any information in the context, and we know that many regular correspondences are conditioned by context.",
"cite_spans": [
{
"start": 981,
"end": 1002,
"text": "(Wettig et al., 2011)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Context Model with Phonetic Features",
"sec_num": "4"
},
{
"text": "We now introduce models that leverage information from the context to try to reduce the uncertainty in the distributions further, lowering the coding cost. To do that, we will code sounds in terms of their phonetic features: rather than coding the symbols (sounds) as atomic, we code them as vectors of phonetic features. Rather than aligning symbol pairs, we align the corresponding features of the symbols. While coding each feature, the model can make use of features of other sounds in its context (environment), through a special decision tree built for that feature.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context Model with Phonetic Features",
"sec_num": "4"
},
{
"text": "We will code each symbol, to be aligned in the complete data, as a feature vector. First we code the Type feature, with values: K (consonant), V (vowel), dot, and word boundary, which we denote as #. Consonants and vowels have their own sets of features, with 2-8 values per feature:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4.1"
},
{
"text": "M Manner: plosive, fricative, glide, ...\nP Place: labial, dental, ..., velar\nX Voiced: -, +\nS Secondary: -, affricate, aspirate, ...",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Consonant articulation",
"sec_num": null
},
{
"text": "V Vertical: high-low\nH Horizontal: front-back\nR Rounding: -, +\nL Length: 1-5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vowel articulation",
"sec_num": null
},
{
"text": "While coding any symbol, the model will be allowed to query a fixed, finite set of candidate contexts. A context is a triplet (L, P, F ), where L is the level-either source or target-and P is one of the positions that the model may query, relative to the position currently being coded; for example, we may allow positions as in Fig. 2 . F is one of the possible features found at that position. Therefore, we will have about 2 levels * 8 positions * 2-6 features \u2248 80 candidate contexts that can be queried by the model, as explained below.",
"cite_spans": [],
"ref_spans": [
{
"start": 328,
"end": 334,
"text": "Fig. 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Contexts",
"sec_num": "4.2"
},
{
"text": "I itself\n-P previous position\n-S previous non-dot symbol\n-K previous consonant\n-V previous vowel\n+S previous or self non-dot symbol\n+K previous or self consonant\n+V previous or self vowel\nFigure 2 : An example of a set of possible positions in the context-relative to the position currently being coded-that can be queried by the context model.",
"cite_spans": [],
"ref_spans": [
{
"start": 189,
"end": 197,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Contexts",
"sec_num": "4.2"
},
{
"text": "We code the complete (i.e., aligned) data using a two-part code, following the MDL Principle. We first code which particular model instance we select from our class of models, and then code the data, given the defined model. Our model class is defined as: a set of decision trees (forest), with one tree to predict each feature on each level. The model instance will define the particular structures for each of the trees.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Two-Part Code",
"sec_num": "4.3"
},
{
"text": "The forest consists of 18 decision trees, one for each feature on the source and the target level: the type feature, 4 vowel and 4 consonant features, times 2 levels. Each node in such a tree will either be a leaf, or will be split by querying one of the candidate contexts defined above. The cost of coding the structure of the tree is one bit for every node-to encode whether this node was split (is an internal node) or is a leaf-plus \u2248 log 80 times the number of internal nodes-to encode which particular context was chosen to split that node. We will explain how the best context to split on is chosen in Sec. 4.6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Two-Part Code",
"sec_num": "4.3"
},
{
"text": "Each feature and level define a tree, e.g., the \"voiced\" (X) feature of the source symbols corresponds to the source-X tree. A node N in this tree holds a distribution over the values of X of only those symbol instances in the complete data that have reached N by following the context queries, starting from the root. The tree structure tells us precisely which path to follow-completely determined by the context. For example, when coding a symbol \u03b1 based on another symbol found in the context of \u03b1-at some level (say, target), some position (say, -K), and one of its features (say, M)-the next edge down the tree is determined by that feature's value; and so on, down to a leaf. For an example of an actual decision tree learned by the model, see Fig. 5 .",
"cite_spans": [],
"ref_spans": [
{
"start": 753,
"end": 759,
"text": "Fig. 5",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "The Two-Part Code",
"sec_num": "4.3"
},
{
"text": "To compute the code length of the complete data, we only need to take into account the distributions at the leaves. We could choose from a variety of coding methods; the crucial point is that the chosen code will assign a particular number-the cost-to every possible alignment of the data. This code-length, or cost, will then serve as the objective function-i.e., it will be the value that the algorithm will try to optimize. Each reduction in cost will correspond directly to a reduction in the entropy of the probability distribution of the symbols, which in turn corresponds to more certainty (i.e., regularity) in the correspondences among the symbols, and to improvement in the alignment. This is the link to our goal, and the reason for introducing code lengths-it gives us a single number that describes the quality of an alignment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Two-Part Code",
"sec_num": "4.3"
},
{
"text": "We use Normalized Maximum Likelihood (NML), (Rissanen, 1996) as our coding scheme. We choose NML because it has certain optimality properties. Using NML, we code the distribution at each leaf node separately, and summing the costs of all leaves gives the total cost of the aligned data-the value of our objective function.",
"cite_spans": [
{
"start": 44,
"end": 60,
"text": "(Rissanen, 1996)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Two-Part Code",
"sec_num": "4.3"
},
{
"text": "Suppose n instances end up in a leaf node N , of the \u03bb-level tree, for feature F having k values (e.g., consonants satisfying N 's context constraints in the source-X tree, with k = 2 values: \u2212 and +), and the values are distributed so that n i instances have value i (with i \u2208 {1, . . . , k}). Then this requires an NML code-length of",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Two-Part Code",
"sec_num": "4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L_{NML}(\\lambda; F; N) = -\\log P_{NML}(\\lambda; F; N) = -\\log \\frac{\\prod_i (n_i / n)^{n_i}}{C(n, k)}",
"eq_num": "(1)"
}
],
"section": "The Two-Part Code",
"sec_num": "4.3"
},
{
"text": "Here \\prod_i (n_i / n)^{n_i} is the maximum likelihood of the multinomial data at node N, and the term",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Two-Part Code",
"sec_num": "4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "C(n, k) = \\sum_{n_1 + \\ldots + n_k = n} \\prod_i (n_i / n)^{n_i}",
"eq_num": "(2)"
}
],
"section": "The Two-Part Code",
"sec_num": "4.3"
},
{
"text": "is a normalizing constant to make P N M L a probability distribution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Two-Part Code",
"sec_num": "4.3"
},
{
"text": "In the MDL literature, e.g., (Gr\u00fcnwald, 2007) , the term log C(n, k) is called the stochastic complexity or the (minimax) regret of the model (in this case, the multinomial model). The NML distribution provides the unique solution to the minimax problem posed in (Shtarkov, 1987) ,",
"cite_spans": [
{
"start": 29,
"end": 45,
"text": "(Gr\u00fcnwald, 2007)",
"ref_id": "BIBREF5"
},
{
"start": 266,
"end": 282,
"text": "(Shtarkov, 1987)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Two-Part Code",
"sec_num": "4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\\min_P \\max_{x^n} \\log \\frac{P(x^n \\mid \\hat{\\Theta}(x^n))}{P(x^n)}",
"eq_num": "(3)"
}
],
"section": "The Two-Part Code",
"sec_num": "4.3"
},
{
"text": "where \\hat{\\Theta}(x^n) = \\arg\\max_{\\Theta} P(x^n | \\Theta) are the maximum likelihood parameters for the data x^n. Thus, P_{NML} minimizes the worst-case regret, i.e., the number of excess bits in the code as compared to the best model in the model class, with hindsight. For details on the computation of this code length see (Kontkanen and Myllym\u00e4ki, 2007) . Learning the model from the observed data now means aligning the word pairs and building the decision trees in such a way as to minimize the two-part code length: the sum of the model's code length-to encode the structure of the trees-and the data's code length-to encode the aligned word pairs, using these trees.",
"cite_spans": [
{
"start": 303,
"end": 334,
"text": "(Kontkanen and Myllym\u00e4ki, 2007)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Two-Part Code",
"sec_num": "4.3"
},
{
"text": "The full learning algorithm runs as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary of the Algorithm",
"sec_num": "4.4"
},
{
"text": "We start with an initial random alignment for each pair of words in the corpus, i.e., for each word pair choose some random path through the matrix depicted in Figure 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 160,
"end": 168,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Summary of the Algorithm",
"sec_num": "4.4"
},
{
"text": "From then on we alternate between two steps: A. re-build the decision trees for all features on source and target levels, and B. re-align all word pairs in the corpus. Both of these operations monotonically decrease the two-part cost function and thus compress the data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary of the Algorithm",
"sec_num": "4.4"
},
{
"text": "We continue until we reach convergence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary of the Algorithm",
"sec_num": "4.4"
},
{
"text": "To align source word \u03c3, consisting of symbols \u03c3 = [\u03c3_1 ... \u03c3_n], \u03c3 \u2208 \u03a3*, with target word \u03c4 = [\u03c4_1 ... \u03c4_m], we use dynamic programming. The tree structures are considered fixed, as are the alignments of all word pairs, except the one currently being aligned-which is subtracted from the counts stored at the leaf nodes. We now fill the matrix V, left-to-right, top-to-bottom. Every possible alignment of \u03c3 and \u03c4 corresponds to a path through this matrix.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Re-alignment Procedure",
"sec_num": "4.5"
},
{
"text": "V(i, j) = min { V(i, j\u22121) + L(. : \u03c4_j), V(i\u22121, j) + L(\u03c3_i : .), V(i\u22121, j\u22121) + L(\u03c3_i : \u03c4_j) }",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Re-alignment Procedure",
"sec_num": "4.5"
},
{
"text": "Each term V (\u2022, \u2022) has been computed earlier by the dynamic programming; the term L(\u2022)-the cost of aligning the two symbols, or inserting or deleting one-is determined by the change in data code length induced by adding this event to the corresponding leaf in all the feature trees it concerns.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Re-alignment Procedure",
"sec_num": "4.5"
},
{
"text": "In particular, the cost of the most probable complete alignment of the two words will be stored in the bottom-right cell, V(n, m).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Re-alignment Procedure",
"sec_num": "4.5"
},
{
"text": "Given a complete alignment of the data, we need to build a decision tree, for each feature on both levels, yielding the lowest two-part cost. The term \"decision tree\" is meant in a probabilistic sense here: instead of a single value, at each node we store a distribution of the corresponding feature values, over all instances that reach this node. The distribution at a leaf is then used to code an instance when it reaches the leaf in question. We code the features in some fixed, pre-set order, and source level before target level.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Building Decision Trees",
"sec_num": "4.6"
},
{
"text": "We now describe in detail the process of building the tree for feature X, for the source level (we will need to do the same for all other features, on both levels, as well). We build this tree as follows. First, we collect all instances of consonants on the source level, gather the counts for feature X, and build an initial count vector; suppose it is: value of X:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Building Decision Trees",
"sec_num": "4.6"
},
{
"text": "+ -\n1001 1002",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Building Decision Trees",
"sec_num": "4.6"
},
{
"text": "This vector is stored at the root of the tree; the cost of this node is computed using NML, eq. 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Building Decision Trees",
"sec_num": "4.6"
},
{
"text": "Next, we try to split this node, by finding such a context that if we query the values of the feature in that context, it will help us reduce the entropy in this count vector. We check in turn all possible candidate contexts, (L, P, F ), and choose the best one. Each candidate refers to some symbol found on the source (\u03c3) or the target (\u03c4 ) level, at some relative position P , and to one of that symbol's features F . We will condition the split on the possible values of F . For each candidate, we try to split on its feature's values, and collect the resulting alignment counts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Building Decision Trees",
"sec_num": "4.6"
},
{
"text": "Suppose one such candidate is (\u03c3, -V, H), i.e., (source-level, previous vowel, Horizontal feature), and suppose that the H-feature has two values: front/back. The vector at the root node (recall, this tree is for the X-feature) would then split into two vectors, e.g.: value of X:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Building Decision Trees",
"sec_num": "4.6"
},
{
"text": "+ -\nX | H=front 1000 1\nX | H=back 1 1001",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Building Decision Trees",
"sec_num": "4.6"
},
{
"text": "This would likely be a very good split, since it reduces the entropy of the distribution in each row almost to zero. The criterion that guides the choice of the best candidate to use for splitting a node is the sum of the code lengths of the resulting split vectors, and the code length is proportional to the entropy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Building Decision Trees",
"sec_num": "4.6"
},
{
"text": "We go through all candidates exhaustively, and greedily choose the one that yields the greatest reduction in entropy and, thereby, the greatest drop in cost. We proceed recursively down the tree, trying to split nodes, and stop when the total tree cost stops decreasing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Building Decision Trees",
"sec_num": "4.6"
},
{
"text": "This completes the tree for feature X on level \u03c3. We build trees for all features and levels similarly, from the current alignment of the complete data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Building Decision Trees",
"sec_num": "4.6"
},
{
"text": "We augment the set of possible values at every node with two additional special branches: =, meaning the symbol at the queried position is of the wrong type and does not have the queried feature, and #, meaning the query ran past the beginning of the word. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Building Decision Trees",
"sec_num": "4.6"
},
{
"text": "One way to evaluate the presented models would require a gold-standard aligned corpus; the models produce alignments which could be compared to the gold-standard alignments, and we could measure performance quantitatively, e.g., in terms of accuracy. However, building a gold-standard aligned corpus for the Uralic data proved to be extremely difficult. In fact, it quickly becomes clear that this problem is at least as difficult as building a full reconstruction for all internal nodes in the family tree (and probably harder), since it requires full knowledge of all sound correspondences within the family. It is also compounded by the problem that the word-forms in the corpus may contain morphological material that is etymologically unrelated: some databases give \"dictionary\" forms, which contain extraneous affixes, and thereby obscure which parts of a given word form stand in etymological relationship with other members in the cognates set, and which do not. We therefore introduce other methods to evaluate the models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and Results",
"sec_num": "5"
},
{
"text": "Compression: In figure 4, we compare the context model, and use as baselines the standard data compressors, Gzip and Bzip, as well as the more basic models presented in (Wettig et al., 2011) , (labeled \"1x1 and \"2x2\"). We test the compression of up to 3200 Finnish-Estonian word pairs, from SSA. Gzip by finding regularities in it (i.e., frequent substrings). The comparison with Gzip is a \"sanity check\": we would like to confirm whether our models find more regularity in the data than would an off-the-shelf data compressor, that has no knowledge that the words in the data are etymologically related. Of course, our models know that they should align pairs of consecutive lines. This test shows that learning about the \"vertical\" correspondences achieves much better compression rates-allows the models to extract greater regularity from the data. Rules of correspondence: One our main goals is to model rules of correspondence among languages. We can evaluate the models based on how good they are at discovering rules. (Wettig et al., 2011) showed that aligning multiple symbols captures some of the context and thereby finds more complex rules than their 1-1 alignment model. However, certain alignments, such as t\u223ct/d, p\u223cp/b, and k\u223ck/g between Finnish and Estonian, cannot be explained by the multiple-symbol model. This is due to the rule of voicing of word-medial plosives in Estonian. This rule could be expressed in terms of Two-level Morphology, (Koskenniemi, 1983) as: a voiceless plosive in Finnish, may correspond to voiced in Estonian, if not word-initial. 3 The context model finds this rule, shown in Fig. 5 . This tree codes the Target-level (i.e., Estonian) Voiced consonant feature. In each node, the counts of corresponding feature values are shown in brackets. 
In the root node-prior to knowing anything about the environment-there is almost complete uncertainty (i.e., high entropy) about the value of Voiced feature of an Estonian consonant: 821 voiceless to 801 voiced in our data. Redder nodes indicate higher entropy, bluer nodes-lower entropy. The query in the root node tells us to check the context Finnish Itself Voiced for the most informative clue about whether the current Estonian consonant is voiced or not. Tracing the options down left to right from the root, we obtain the rules. The leftmost branch says, if the Finnish is voiced (\u2295), then the Estonian is almost certainly voiced as well-615 voiced to 2 voiceless in this case. If the Finnish is voiceless (Finnish Itself Voiced = ), it says voicing may occur, but only in the red nodes-i.e., only if preceded by a voiced consonant on Estonian level (the branch marked by \u2295, 56 cases), or-if previous position is not a consonant (the = branch indicates that the candidate's query does not apply: i.e., the sound found in that position is not a consonant)it can be voiced only if the corresponding Finnish is a plosive (P, 78 cases). The blue nodes in this branch say that otherwise, the Estonian consonant almost certainly remains voiceless.",
"cite_spans": [
{
"start": 169,
"end": 190,
"text": "(Wettig et al., 2011)",
"ref_id": "BIBREF22"
},
{
"start": 296,
"end": 300,
"text": "Gzip",
"ref_id": null
},
{
"start": 1025,
"end": 1046,
"text": "(Wettig et al., 2011)",
"ref_id": "BIBREF22"
},
{
"start": 1459,
"end": 1478,
"text": "(Koskenniemi, 1983)",
"ref_id": "BIBREF13"
},
{
"start": 1574,
"end": 1575,
"text": "3",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1620,
"end": 1626,
"text": "Fig. 5",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Evaluation and Results",
"sec_num": "5"
},
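As an illustration, the voicing rule traced in Fig. 5 can be transcribed as a small decision function. This is a hand-coded, simplified rendering of the tree's majority predictions, not the learned model itself; all parameter names are ours, and the word-initial check comes from the two-level statement of the rule rather than from a node of the tree.

```python
def estonian_voiced(fi_voiced, word_initial, prev_et_is_consonant,
                    prev_et_voiced, fi_is_plosive):
    """Majority prediction for the Estonian 'Voiced' feature, following Fig. 5."""
    if fi_voiced:
        # Finnish sound is voiced: Estonian almost certainly voiced (615 vs. 2)
        return True
    if word_initial:
        # Voiceless plosives are voiced only word-medially
        return False
    if prev_et_is_consonant:
        # Voicing only after a voiced Estonian consonant (56 cases)
        return prev_et_voiced
    # Previous position is not a consonant (the '=' branch):
    # voicing only when the corresponding Finnish sound is a plosive (78 cases)
    return fi_is_plosive
```

For example, a word-medial voiceless Finnish plosive between vowels is predicted voiced in Estonian, while the same plosive word-initially is predicted voiceless, mirroring the t~t/d, p~p/b, k~k/g correspondences.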
{
"text": "The context models discover numerous complex rules for different language pairs. For example, they learn a rule that initial Finnish k \"changes\" (corresponds) to h in Hungarian, if it is followed by a back vowel; the correspondence between Komi trills and Udmurt sibilants; etc.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and Results",
"sec_num": "5"
},
{
"text": "Imputation: We introduce a novel test of the quality of the models, by using them to impute unseen data, as follows. For a given model, and a language pair (L 1 , L 2 )-e.g., (Finnish, Estonian)-hold out one word pair, and train the model on the remaining data. Then show the model the hidden Finnish word and let it guess the corresponding Estonian. Imputation can be done for all models with a simple dynamic programming algorithm, similar to the Viterbi-like search used during training. Formally, given the hidden Finnish string, the imputation procedure selects from all possible Estonian strings the most probable Estonian string, given the model. We then compute an edit distance between the imputed sting and the true withheld Estonian word (e.g., using the Levenshtein distance). We repeat this procedure for all word pairs in the (L 1 , L 2 ) data set, sum the edit distances and normalize by the total size of the (true) L 2 data-this yields the Normalized Edit Distance N ED(",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and Results",
"sec_num": "5"
},
{
"text": "L 2 |L 1 , M ) be- tween L 1 and L 2 , under model M .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and Results",
"sec_num": "5"
},
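The NED computation itself is straightforward. Below is a minimal sketch in our own code, with made-up toy word pairs standing in for (imputed, true) Estonian strings; the names `levenshtein` and `ned` are ours.

```python
def levenshtein(a, b):
    """Edit distance between strings a and b, by dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # delete ca
                           cur[j - 1] + 1,             # insert cb
                           prev[j - 1] + (ca != cb)))  # match / substitute
        prev = cur
    return prev[-1]

def ned(pairs):
    """Normalized Edit Distance: summed edit distances between imputed and
    true target words, divided by the total size of the true target data."""
    total_dist = sum(levenshtein(imputed, true) for imputed, true in pairs)
    total_size = sum(len(true) for _, true in pairs)
    return total_dist / total_size

# Toy (imputed, true) pairs -- purely illustrative strings:
pairs = [("kala", "kala"), ("jogi", "jõgi"), ("hamba", "hammas")]
print(ned(pairs))  # 3 edits over 14 true symbols
```

A lower NED means the model's guesses sit closer to the true target words, which is exactly the sense in which one model "knows more" about L2 given L1 than another.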
{
"text": "Imputation is a more intuitive measure of the model's quality than code length, with a clear practical interpretation. NED is also the ultimate test of the model's quality.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and Results",
"sec_num": "5"
},
{
"text": "If model M imputes better than M -i.e., N ED(",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and Results",
"sec_num": "5"
},
{
"text": "L 2 |L 1 , M ) < N ED(L 2 |L 1 , M )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and Results",
"sec_num": "5"
},
{
"text": "-then it is difficult to argue that M could be in any sense \"worse\" than Mit has learned more about the regularities between L 1 and L 2 , and it knows more about L 2 given L 1 . The context model, which has much lower cost than the baseline, almost always has lower NED. This also yields an important insight: it is an encouraging indication that optimizing the code length is a good approach-the algorithm does not optimize NED directly, and yet the cost correlates strongly with NED, which is a simple and intuitive measure of the model's quality.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and Results",
"sec_num": "5"
},
{
"text": "We have presented a novel feature-based contextaware MDL model, and a comparison of its performance against prior models for the task of alignment of etymological data. We have evaluated the models by examining the the rules of correspondence that they discovers, by comparing compression cost, imputation power and language distances induced by the imputation. The models take only the etymological data set as input, and require no further linguistic assumptions. In this regard, they is as objective as possible, given the data. The data set itself, of course, may be highly subjective and questionable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "The objectivity of models given the data now opens new possibilities for comparing entire data sets. For example, we can begin to compare the Finnish and Estonian datasets in SSA vs. Star-Ling, although the data sets have quite different characteristics, e.g., different size-3200 vs. 800 word pairs, respectively-and the comparison is done impartially, relying solely on the data provided. Another direct consequence of the presented methods is that they enable us to quantify uncertainty of entries in the corpus of etymological data. For example, for a given entry x in language L 1 , we can compute exactly the probability that x would be imputed by any of the models, trained on all the remaining data from L 1 plus any other set of languages in the family. This can be applied equally to any entry, in particular to entries marked dubious by the database creators.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "We can use this method to approach the question of comparison of \"competing\" etymological datasets. The cost of an optimal alignment obtained over a given data set serves as a measure of its internal consistency.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "We are currently working to combine the context model with 3-and higher-dimensional models, and to extend these models to perform diachronic imputation, i.e., reconstruction of protoforms. We also intend to test the models on databases of other language families.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "Using this method, we found that the running time did not scale well for more than three languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In fact, phonetically, in modern spoken Estonian, the consonants that are written using the symbols b,d,g are not technically voiced, but that is a finer point, we use this rule for illustration of the principle.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We are very grateful to the anonymous reviewers for their thoughtful and helpful comments. We thank Suvi Hiltunen for the implementation of the models, and Arto Vihavainen for implementing some of the earlier models. This research was supported by the Uralink Project, funded by the Academy of Finland and by the Russian Fund for the Humanities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Historical and comparative linguistics",
"authors": [
{
"first": "Raimo",
"middle": [],
"last": "Anttila",
"suffix": ""
}
],
"year": 1989,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Raimo Anttila. 1989. Historical and comparative lin- guistics. John Benjamins.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "An experimental study comparing linguistic phylogenetic reconstruction methods",
"authors": [
{
"first": "G",
"middle": [],
"last": "Fran\u00e7ois",
"suffix": ""
},
{
"first": "Tandy",
"middle": [],
"last": "Barban\u00e7on",
"suffix": ""
},
{
"first": "Don",
"middle": [],
"last": "Warnow",
"suffix": ""
},
{
"first": "Steven",
"middle": [
"N"
],
"last": "Ringe",
"suffix": ""
},
{
"first": "Luay",
"middle": [],
"last": "Evans",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Nakhleh",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Conference on Languages and Genes, UC Santa Barbara",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fran\u00e7ois G. Barban\u00e7on, Tandy Warnow, Don Ringe, Steven N. Evans, and Luay Nakhleh. 2009. An ex- perimental study comparing linguistic phylogenetic reconstruction methods. In Proceedings of the Con- ference on Languages and Genes, UC Santa Bar- bara. Cambridge University Press.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A new objective function for word alignment",
"authors": [
{
"first": "Tugba",
"middle": [],
"last": "Bodrumlu",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "Sujith",
"middle": [],
"last": "Ravi",
"suffix": ""
}
],
"year": 2009,
"venue": "Proc. NAACL Workshop on Integer Linear Programming for NLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tugba Bodrumlu, Kevin Knight, and Sujith Ravi. 2009. A new objective function for word alignment. In Proc. NAACL Workshop on Integer Linear Pro- gramming for NLP.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A probabilistic approach to diachronic phonology",
"authors": [
{
"first": "Alexandre",
"middle": [],
"last": "Bouchard-C\u00f4t\u00e9",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Griffiths",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)",
"volume": "",
"issue": "",
"pages": "887--896",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexandre Bouchard-C\u00f4t\u00e9, Percy Liang, Thomas Grif- fiths, and Dan Klein. 2007. A probabilistic ap- proach to diachronic phonology. In Proceedings of the 2007 Joint Conference on Empirical Meth- ods in Natural Language Processing and Com- putational Natural Language Learning (EMNLP- CoNLL), pages 887-896, Prague, June.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "The mathematics of statistical machine translation: Parameter estimation",
"authors": [
{
"first": "F",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Brown",
"suffix": ""
},
{
"first": "J",
"middle": [
"Della"
],
"last": "Vincent",
"suffix": ""
},
{
"first": "Stephen",
"middle": [
"A"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"L"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mercer",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "2",
"pages": "263--311",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter F. Brown, Vincent J. Della Pietra, Stephen A. Della Pietra, and Robert. L. Mercer. 1993. The mathematics of statistical machine translation: Pa- rameter estimation. Computational Linguistics, 19(2):263-311.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "The Minimum Description Length Principle",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Gr\u00fcnwald",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Gr\u00fcnwald. 2007. The Minimum Description Length Principle. MIT Press.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Suomen Sanojen Alkuper\u00e4 (The Origin of Finnish Words)",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Suomen Sanojen Alkuper\u00e4 (The Origin of Finnish Words).",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Determining recurrent sound correspondences by inducing translation models",
"authors": [
{
"first": "Brett",
"middle": [],
"last": "Kessler",
"suffix": ""
}
],
"year": 2001,
"venue": "The Significance of Word Lists: Statistical Tests for Investigating Historical Connections Between Languages",
"volume": "",
"issue": "",
"pages": "488--494",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brett Kessler. 2001. The Significance of Word Lists: Statistical Tests for Investigating Historical Con- nections Between Languages. The University of Chicago Press, Stanford, CA. Grzegorz Kondrak. 2002. Determining recur- rent sound correspondences by inducing translation models. In Proceedings of COLING 2002: 19th In- ternational Conference on Computational Linguis- tics, pages 488-494, Taipei, August.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Identifying complex sound correspondences in bilingual wordlists",
"authors": [
{
"first": "Grzegorz",
"middle": [],
"last": "Kondrak",
"suffix": ""
}
],
"year": 2003,
"venue": "Computational Linguistics and Intelligent Text Processing (CICLing-2003)",
"volume": "",
"issue": "",
"pages": "432--443",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Grzegorz Kondrak. 2003. Identifying complex sound correspondences in bilingual wordlists. In A. Gel- bukh, editor, Computational Linguistics and Intel- ligent Text Processing (CICLing-2003), pages 432- 443, Mexico City, February. Springer-Verlag Lec- ture Notes in Computer Science, No. 2588.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Combining evidence in cognate identification",
"authors": [
{
"first": "Grzegorz",
"middle": [],
"last": "Kondrak",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the Seventeenth Canadian Conference on Artificial Intelligence (Canadian AI 2004)",
"volume": "3060",
"issue": "",
"pages": "44--59",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Grzegorz Kondrak. 2004. Combining evidence in cognate identification. In Proceedings of the Sev- enteenth Canadian Conference on Artificial Intelli- gence (Canadian AI 2004), pages 44-59, London, Ontario, May. Lecture Notes in Computer Science 3060, Springer-Verlag.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A linear-time algorithm for computing the multinomial stochastic complexity",
"authors": [
{
"first": "Petri",
"middle": [],
"last": "Kontkanen",
"suffix": ""
},
{
"first": "Petri",
"middle": [],
"last": "Myllym\u00e4ki",
"suffix": ""
}
],
"year": 2007,
"venue": "Information Processing Letters",
"volume": "103",
"issue": "6",
"pages": "227--233",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Petri Kontkanen and Petri Myllym\u00e4ki. 2007. A linear-time algorithm for computing the multino- mial stochastic complexity. Information Processing Letters, 103(6):227-233.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Two-level morphology: A general computational model for word-form recognition and production",
"authors": [
{
"first": "Kimmo",
"middle": [],
"last": "Koskenniemi",
"suffix": ""
}
],
"year": 1983,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kimmo Koskenniemi. 1983. Two-level morphol- ogy: A general computational model for word-form recognition and production. Ph.D. thesis, Univer- sity of Helsinki, Finland.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Automatic discovery of noncompositional compounds in parallel data",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Melamed",
"suffix": ""
}
],
"year": 1997,
"venue": "The Second Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "97--108",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Melamed. 1997. Automatic discovery of non- compositional compounds in parallel data. In The Second Conference on Empirical Methods in Nat- ural Language Processing, pages 97-108, Hissar, Bulgaria.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Models of translational equivalence among words",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Melamed",
"suffix": ""
}
],
"year": 2000,
"venue": "Computational Linguistics",
"volume": "26",
"issue": "2",
"pages": "221--249",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Melamed. 2000. Models of translational equiv- alence among words. Computational Linguistics, 26(2):221-249.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Perfect phylogenetic networks: A new methodology for reconstructing the evolutionary history of natural languages",
"authors": [
{
"first": "Luay",
"middle": [],
"last": "Nakhleh",
"suffix": ""
},
{
"first": "Don",
"middle": [],
"last": "Ringe",
"suffix": ""
},
{
"first": "Tandy",
"middle": [],
"last": "Warnow",
"suffix": ""
}
],
"year": 2005,
"venue": "Language (Journal of the Linguistic Society of America)",
"volume": "81",
"issue": "2",
"pages": "382--420",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luay Nakhleh, Don Ringe, and Tandy Warnow. 2005. Perfect phylogenetic networks: A new methodol- ogy for reconstructing the evolutionary history of natural languages. Language (Journal of the Lin- guistic Society of America), 81(2):382-420.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Indo-European and computational cladistics",
"authors": [
{
"first": "K\u00e1roly",
"middle": [],
"last": "R\u00e9dei",
"suffix": ""
},
{
"first": ";",
"middle": [],
"last": "Harrassowitz",
"suffix": ""
},
{
"first": "Wiesbaden",
"middle": [
"Don"
],
"last": "Ringe",
"suffix": ""
},
{
"first": "Tandy",
"middle": [],
"last": "Warnow",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Taylor",
"suffix": ""
}
],
"year": 1988,
"venue": "Transactions of the Philological Society",
"volume": "100",
"issue": "1",
"pages": "59--129",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K\u00e1roly R\u00e9dei. 1988-1991. Uralisches etymologisches W\u00f6rterbuch. Harrassowitz, Wiesbaden. Don Ringe, Tandy Warnow, and A. Taylor. 2002. Indo-European and computational cladis- tics. Transactions of the Philological Society, 100(1):59-129.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Fisher information and stochastic complexity",
"authors": [
{
"first": "Jorma",
"middle": [],
"last": "Rissanen",
"suffix": ""
}
],
"year": 1996,
"venue": "IEEE Transactions on Information Theory",
"volume": "42",
"issue": "1",
"pages": "40--47",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jorma Rissanen. 1996. Fisher information and stochastic complexity. IEEE Transactions on Infor- mation Theory, 42(1):40-47, January.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Universal sequential coding of single messages",
"authors": [
{
"first": "Yuri",
"middle": [
"M"
],
"last": "Shtarkov",
"suffix": ""
}
],
"year": 1987,
"venue": "Problems of Information Transmission",
"volume": "23",
"issue": "",
"pages": "3--17",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuri M. Shtarkov. 1987. Universal sequential coding of single messages. Problems of Information Trans- mission, 23:3-17.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Tower of babel: Etymological databases",
"authors": [
{
"first": "Sergei",
"middle": [
"A"
],
"last": "Starostin",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sergei A. Starostin. 2005. Tower of babel: Etymolog- ical databases. http://newstar.rinet.ru/.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "HMM-based word alignment in statistical translation",
"authors": [
{
"first": "Stephan",
"middle": [],
"last": "Vogel",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
},
{
"first": "Christoph",
"middle": [],
"last": "Tillmann",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of 16th Conference on Computational Linguistics (COLING 96)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephan Vogel, Hermann Ney, and Christoph Till- mann. 1996. HMM-based word alignment in sta- tistical translation. In Proceedings of 16th Confer- ence on Computational Linguistics (COLING 96), Copenhagen, Denmark, August.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "MDL-based Models for Alignment of Etymological Data",
"authors": [
{
"first": "Hannes",
"middle": [],
"last": "Wettig",
"suffix": ""
},
{
"first": "Suvi",
"middle": [],
"last": "Hiltunen",
"suffix": ""
},
{
"first": "Roman",
"middle": [],
"last": "Yangarber",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of RANLP: the 8th Conference on Recent Advances in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hannes Wettig, Suvi Hiltunen, and Roman Yangarber. 2011. MDL-based Models for Alignment of Et- ymological Data. In Proceedings of RANLP: the 8th Conference on Recent Advances in Natural Lan- guage Processing, Hissar, Bulgaria.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "Finno-Ugric branch of Uralic language family (the data used in the experiments in this paper)",
"type_str": "figure",
"num": null
},
"FIGREF1": {
"uris": null,
"text": "Dynamic programming matrix V, to search for the most probable alignment responds to exactly one path through this matrix: starting with cost equal to 0 in the top-left cell, moving only downward or rightward, and terminating in the bottom-right cell. In this Viterbi-like matrix, every cell corresponds to a partially completed alignment: reaching cell (i, j) means having read off i symbols of the source word and j symbols of the target. Each cell V (i, j)-marked X in theFigure-stores the cost of the most probable path so far: the most probable way to have scanned \u03c3 through symbol \u03c3 i and \u03c4 through \u03c4 j :",
"type_str": "figure",
"num": null
},
"FIGREF2": {
"uris": null,
"text": "Comparison of compression power: Finnish-Estonian data from SSA, using the context model vs. the baseline models and standard compressors.",
"type_str": "figure",
"num": null
},
"FIGREF3": {
"uris": null,
"text": "Part of a tree, showing the rule for voicing of medial plosives in Estonian, conditioned on Finnish.",
"type_str": "figure",
"num": null
},
"TABREF1": {
"text": "",
"html": null,
"type_str": "table",
"content": "<table><tr><td>: Pairwise normalized edit distances for Finno-</td></tr><tr><td>Ugric languages, on StarLing data (symmetrized by</td></tr><tr><td>averaging over the two directions of imputation).</td></tr></table>",
"num": null
}
}
}
}