| { |
| "paper_id": "2022", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T15:21:24.297864Z" |
| }, |
| "title": "Bayesian Phylogenetic Cognate Prediction", |
| "authors": [ |
| { |
| "first": "Gerhard", |
| "middle": [], |
| "last": "J\u00e4ger", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of T\u00fcbingen Seminar f\u00fcr Sprachwissenschaft", |
| "location": {} |
| }, |
| "email": "gerhard.jaeger@uni-tuebingen.de" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "In J\u00e4ger (2019) a computational framework was defined to start from parallel word lists of related languages and infer the corresponding vocabulary of the shared proto-language. The SIGTYP 2022 Shared Task is closely related. The main difference is that what is to be reconstructed is not the proto-form but an unknown word from an extant language. The system described here is a re-implementation of the tools used in the mentioned paper, adapted to the current task.", |
| "pdf_parse": { |
| "paper_id": "2022", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "In J\u00e4ger (2019) a computational framework was defined to start from parallel word lists of related languages and infer the corresponding vocabulary of the shared proto-language. The SIGTYP 2022 Shared Task is closely related. The main difference is that what is to be reconstructed is not the proto-form but an unknown word from an extant language. The system described here is a re-implementation of the tools used in the mentioned paper, adapted to the current task.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "In J\u00e4ger (2019) I presented a pilot study of a computational historical linguistics workflow. Starting from parallel word lists (taken from Wichmann et al. 2016) of 29 Romance languages and dialects, covering 40 core concepts, it produced reconstructions of the Proto-Romance words for the same concepts.", |
| "cite_spans": [ |
| { |
| "start": 140, |
| "end": 161, |
| "text": "Wichmann et al. 2016)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The intermediate steps of this workflow are 1. for each concept, cluster the corresponding sound strings into cognate classes, 2. infer a posterior distribution of phylogenies of the covered doculects using Bayesian inference, 3. apply Bayesian inference to identify the maximum a posteriori cognate class at the root of the tree for each concept (ancestral state reconstruction, ASR), 4. apply multiple sequence alignment (MSA) to the words of each cognate class, 5. apply ASR to each alignment column of the MSAs of the cognate classes identified in step 3; gaps are treated as regular characters, and 6. concatenate the reconstructions and removing gaps.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The result turned out to be an imperfect but reasonable approximations of the attested Latin wordlist.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The SIGTYP 2022 Shared Task on the Prediction of Cognate Reflexes (https://github.com/ sigtyp/ST2022, List et al. 2022 ) is very similar in nature. The system described here is an adaptation of J\u00e4ger's (2019) workflow to this task.", |
| "cite_spans": [ |
| { |
| "start": 102, |
| "end": 118, |
| "text": "List et al. 2022", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The authors of the Shared Task made parallel word lists for 20 language families available. For details of the provinence of the data and the pre-processing steps performed, see List et al. (2022) . Each dataset comprises between four and 19 related languages, and between 500 and ca. 10,000 words. Words are classified according to cognate classes, which are based either on expert judgments or are inferred via automatic cognate detection. No information about the meanings of the words are available for training or inference. All words are transcribed in IPA and tokenized.", |
| "cite_spans": [ |
| { |
| "start": 178, |
| "end": 196, |
| "text": "List et al. (2022)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The data are arranged in a table with cognate classes as rows and languages as columns. In Table 1 , a small part from the dataset kesslersignificance (based on Kessler 2001) is shown for illustration.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 91, |
| "end": 98, |
| "text": "Table 1", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Data", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Each dataset was split into a training set and a test set. The proportion of test data was varied between 10%, 20%, 30%, 40% and 50%, leading to a total of 50 datasets, each consisting of a training and a test set. For the test data, one word per row was masked, using each attested word for masking in turn. The task is to predict the masked words from the other cognates in the same row. Table 2 contains an example row from such a test set. The task is to infer the French word which is cognate to Albanian piski, English fIS, German fiS and Latin piski. In a separate file which is only to be used for evaluation, the correct solution -pS in this case -is given.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 390, |
| "end": 397, |
| "text": "Table 2", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Data", |
| "sec_num": "2" |
| }, |
| { |
| "text": "COGID 920 - Albanian: (no reflex); English: h A r t; French: k oe r; German: h e r ts @ n; Latin: k o r d. COGID 1083 - Albanian: (no reflex); English: h O r n; French: k O r n; German: h o r n; Latin: k o r n u:. COGID 1150 - Albanian: S k u r t @ r; English: S O r t; French: k u r t; German: k u r ts; Latin: (no reflex).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data", |
| "sec_num": "2" |
| }, |
| { |
| "text": "For each of the 50 datasets, a system can be trained using the complete training set. For prediction, the trained system only \"sees\" one row of the test data and has to predict the masked word.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data", |
| "sec_num": "2" |
| }, |
| { |
| "text": "This task differs from the one described in (J\u00e4ger, 2019) mainly by the fact that not some ancestral word form has to be inferred but a word from an extant language. For the particular inference methods used, this difference is actually inessential, since it is based on a time-reversible model of language change.", |
| "cite_spans": [ |
| { |
| "start": 44, |
| "end": 57, |
| "text": "(J\u00e4ger, 2019)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Methods", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The first step of the workflow by J\u00e4ger (2019), identifying cognate classes, has already been performed here. This led to the following workflow:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Methods", |
| "sec_num": "3" |
| }, |
| { |
| "text": "1. Train a pair-hidden Markov model (pHMM; see Durbin et al. 1989) for pairwise string alignment. 1 2. Infer a preliminary phylogenetic tree via UP-GMA (Sokal and Michener, 1958) .", |
| "cite_spans": [ |
| { |
| "start": 47, |
| "end": 66, |
| "text": "Durbin et al. 1989)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 152, |
| "end": 178, |
| "text": "(Sokal and Michener, 1958)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Methods", |
| "sec_num": "3" |
| }, |
| { |
| "text": "3. Perform MSA per cognate class using the T-Coffee algorithm (Notredame et al., 2000) .", |
| "cite_spans": [ |
| { |
| "start": 62, |
| "end": 86, |
| "text": "(Notredame et al., 2000)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Methods", |
| "sec_num": "3" |
| }, |
| { |
| "text": "4. Join all MSA matrices and use this as character matrix for Bayesian phylogenetic inference. 6. Apply MSA to the non-masked entries in the test row using the model trained in steps 1 and 2.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Methods", |
| "sec_num": "3" |
| }, |
| { |
| "text": "7. Find the maximum a-posteriori state for each MSA column for the masked entry, using the posterior distributions inferred in steps 4 and 5 as priors. Concatenate the states inferred in the previous step and remove gap symbols.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Methods", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Each of these steps will be briefly explained in the following subsections.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Methods", |
| "sec_num": "3" |
| }, |
| { |
| "text": "A pair-Hidden Markov Model (pHMM) is a Hidden Markov model with two parallel output tapes. In each state, the model may emit a symbol on the first, the second or on both tapes. The architecture used here is taken from Durbin et al. (1989) and schematically displayed in Figure 1 . The state M is the match state, where the model simultaneously emits one symbol on each tape. In state X only a symbol to the first tape is emitted, and likewise for state Y and the second tape. When the model reaches the end state (where no symbol is emitted), each tape contains a symbol sequence. The joint probability of this sequence and the simultaneous sequence of hidden states is determined by the product of the transition and emission probabilities used.", |
| "cite_spans": [ |
| { |
| "start": 218, |
| "end": 238, |
| "text": "Durbin et al. (1989)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 270, |
| "end": 278, |
| "text": "Figure 1", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Training a Pair-Hidden Markov Model", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "M X Y end 1-2\u03b4 -\u03c4 \u03c4 \u03c4 \u03c4 \u03b4 \u03b4 \u03f5 \u03f5 1-\u03f5-\u03c4 1-\u03f5-\u03c4", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training a Pair-Hidden Markov Model", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Crucially, the sequence of hidden states of one pass of the model determines a pairwise alignment of the strings produced. M identifies a match column, X a column with a gap in the second string, and Y a gap in the first string. If the parameters of the model are known, the maximum likelihood alignment between two strings can be found using the Viterbi algorithm. 3 It is assumed that the alphabet from which the words are constructed are known in advance. The parameters of the model are the transition probabilities \u03b4, \u03f5 and \u03c4 , and the emission probabilities for each state. For state M, this is a probability distribution over pairs of symbols from the alphabet. I assume the emission probabilites for states X and Y to be identical; both are a probability distribution over the alphabet.", |
| "cite_spans": [ |
| { |
| "start": 366, |
| "end": 367, |
| "text": "3", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training a Pair-Hidden Markov Model", |
| "sec_num": "3.1" |
| }, |
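The Viterbi computation for this pHMM can be sketched as follows. This is a minimal Python illustration of the three-state model of Figure 1, not the author's Julia implementation; all function names are mine, and the start state is treated like M, following Durbin et al.:

```python
import math

NEG = float("-inf")

def viterbi_align(x, y, delta, eps, tau, log_pm, log_pg):
    """Best-path (Viterbi) alignment under the three-state pHMM (M, X, Y).
    log_pm(a, b): log match-state emission; log_pg(a): log gap-state emission."""
    n, m = len(x), len(y)
    V = {s: [[NEG] * (m + 1) for _ in range(n + 1)] for s in "MXY"}
    back = {}
    V["M"][0][0] = 0.0  # the start state behaves like M
    trans = {("M", "M"): math.log(1 - 2 * delta - tau),
             ("M", "X"): math.log(delta), ("M", "Y"): math.log(delta),
             ("X", "M"): math.log(1 - eps - tau), ("Y", "M"): math.log(1 - eps - tau),
             ("X", "X"): math.log(eps), ("Y", "Y"): math.log(eps)}
    for i in range(n + 1):
        for j in range(m + 1):
            if i == j == 0:
                continue
            cands = {"M": [], "X": [], "Y": []}
            if i > 0 and j > 0:  # match: emit x[i-1] and y[j-1]
                cands["M"] = [(V[s][i-1][j-1] + trans[(s, "M")] + log_pm(x[i-1], y[j-1]), s)
                              for s in "MXY"]
            if i > 0:            # gap in y: emit x[i-1] only (no X<->Y jumps)
                cands["X"] = [(V[s][i-1][j] + trans[(s, "X")] + log_pg(x[i-1]), s)
                              for s in "MX"]
            if j > 0:            # gap in x: emit y[j-1] only
                cands["Y"] = [(V[s][i][j-1] + trans[(s, "Y")] + log_pg(y[j-1]), s)
                              for s in "MY"]
            for st, lst in cands.items():
                if lst:
                    V[st][i][j], prev = max(lst)
                    back[(st, i, j)] = prev
    score, state = max((V[s][n][m] + math.log(tau), s) for s in "MXY")
    aln, i, j = [], n, m  # traceback of the best state path
    while (i, j) != (0, 0):
        prev = back[(state, i, j)]
        if state == "M":
            aln.append((x[i-1], y[j-1])); i, j = i - 1, j - 1
        elif state == "X":
            aln.append((x[i-1], "-")); i -= 1
        else:
            aln.append(("-", y[j-1])); j -= 1
        state = prev
    return score, aln[::-1]
```

With emission parameters as initialized in the text, this best-path computation approximates Levenshtein alignment.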
| { |
| "text": "Given a training set of pairs of strings, parameters of the model can be estimated using the Baum-Welch algorithm, an incarnation of the EM algorithm. If values for all parameters of the model are given, the frequency of all transitions and all emissions for a given set of string pairs are estimated (expectation step). The conditional relative frequencies for each transition and emission are then used as new parameter values (maximization step). This procedure is repeated many times, starting from an arbitrary initial state.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training a Pair-Hidden Markov Model", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "In the system described here, the pHMM was initialized with transition probabilities \u03b4 = \u03c4 = 0.25, \u03f5 = 0.375. The initial emission probabilities at the gap states X and Y are uniform distributions. The emission probabilities in the match state M are", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training a Pair-Hidden Markov Model", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "p(a, b) \u221d 1 if a \u0338 = b p(a, a) \u221d |alphabet| + 1", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training a Pair-Hidden Markov Model", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "These choices are motivated by the idea that Viterbi alignment in the initial state should approximate Levenshtein alignment.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training a Pair-Hidden Markov Model", |
| "sec_num": "3.1" |
| }, |
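The initialization rule above can be written out directly; a small sketch (function name mine):

```python
def init_match_emissions(alphabet):
    """Initial match-state emissions: weight |alphabet| + 1 for identical pairs,
    weight 1 for non-identical pairs, normalized to a joint distribution.
    These weights make initial Viterbi alignment approximate Levenshtein alignment."""
    symbols = sorted(alphabet)
    weights = {(a, b): (len(symbols) + 1.0 if a == b else 1.0)
               for a in symbols for b in symbols}
    total = sum(weights.values())
    return {pair: w / total for pair, w in weights.items()}
```

For a 41-symbol ASJP alphabet, each identical pair thus receives 42 times the initial probability of any non-identical pair.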
| { |
| "text": "For training and MSA, all training strings (and later test strings) were converted into the ASJP alphabet (Brown et al., 2008) , which comprises just 41 sound classes, to keep the number of parameters to be estimated manageable. 4 The conversion was performed using the software package LingPy (List and Forkel, 2021) . Training word pairs, i.e., all pairs of cognate words from the training set, were arranged in random order and split into minibatches of size 20. An EM step was performed for each mini-batch. This procedure was repeated for two epochs over all mini-batches.", |
| "cite_spans": [ |
| { |
| "start": 106, |
| "end": 126, |
| "text": "(Brown et al., 2008)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 229, |
| "end": 230, |
| "text": "4", |
| "ref_id": null |
| }, |
| { |
| "start": 294, |
| "end": 317, |
| "text": "(List and Forkel, 2021)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training a Pair-Hidden Markov Model", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "As preparation for multiple sequence alignment, a guide tree over the languages is required. For this purpose, the pairwise normalized Levenshtein distance (i.e., the edit distance divided by the length of the longer string) was computed between any pair of cognate words. The distance between two languages was then computed as the average word distance between any two cognate words from these languages.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "UPGMA Tree", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "The resulting pairwise language distances were used as input for the UPGMA algorithm to infer a language tree. E.g., for the dataset kesslersignificance with 10% test data, the resulting tree has the topology ((Latin, (French, Albanian)), (English, German)).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "UPGMA Tree", |
| "sec_num": "3.2" |
| }, |
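The distance computation and guide-tree construction can be sketched with standard-library Python only. This is illustrative code, not the system's implementation; it returns only the topology, omitting branch lengths:

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[-1] + 1, prev[j-1] + (ca != cb)))
        prev = cur
    return prev[-1]

def norm_lev(a, b):
    """Normalized Levenshtein distance: edit distance over the longer length."""
    if not a and not b:
        return 0.0
    return levenshtein(a, b) / max(len(a), len(b))

def upgma(names, d):
    """UPGMA: repeatedly join the closest pair, averaging distances weighted
    by cluster size. `d` maps frozenset({x, y}) to a distance."""
    active = {n: (n, 1) for n in names}   # label -> (subtree, cluster size)
    dist = dict(d)
    while len(active) > 1:
        pair = min((frozenset({x, y}) for x in active for y in active if x != y),
                   key=lambda p: dist[p])
        a, b = tuple(pair)
        (ta, na), (tb, nb) = active.pop(a), active.pop(b)
        merged = a + "+" + b
        for c in active:  # size-weighted average linkage
            dist[frozenset({merged, c})] = (na * dist[frozenset({a, c})] +
                                            nb * dist[frozenset({b, c})]) / (na + nb)
        active[merged] = ((ta, tb), na + nb)
    return next(iter(active.values()))[0]
```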
| { |
| "text": "This topology is evidently not perfect (Albanian having the wrong location), but the next step, while requiring a guide tree, is not very sensitive to the specific tree topology.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "UPGMA Tree", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "The alignment method described in Subsection 3.1 above is only capable of performing pairwise sequence alignment. Modifying it to multiple strings would require to increase the number of states, and concommittantly computation time, exponentially in the number of sequences. The T-Coffee method of multiple sequence alignment (Notredame et al., 2000) represents a compromise combining good results with computational efficiency.", |
| "cite_spans": [ |
| { |
| "start": 326, |
| "end": 350, |
| "text": "(Notredame et al., 2000)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Multiple Sequence Alignment", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "To compute an MSA for a group of words, first all pairs of words are aligned pairwise. For this step, I used Viterbi alignment with the pHMM parameters described in Subsection 3.1. During the next step of T-Coffee, all threefold alignments are computed simply by combining two pairwise alignments from the previous step. The alignment scores between any pair of symbol tokens are obtained by counting all threefold alignments where these symbols occur in the first and last column, weighted by the Hamming similarity between the entire first and last row.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Multiple Sequence Alignment", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "Using these scores, progressive alignment (Feng and Doolittle, 1987) is performed using a guide tree.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Multiple Sequence Alignment", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "To continue the example mentioned above, the MSA covering the first row of Table 1 comes out as in Table 3 .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 75, |
| "end": 82, |
| "text": "Table 1", |
| "ref_id": "TABREF0" |
| }, |
| { |
| "start": 99, |
| "end": 106, |
| "text": "Table 3", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Multiple Sequence Alignment", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "Albanian: - - - - - -; English: h o r t - -; French: k E r - - -; German: h e r C I n; Latin: k o r d - -", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Multiple Sequence Alignment", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "The MSAs for the training data thus obtained were used to perform more sophisticated, Bayesian phylogenetic inference. For this purpose each symbol in the MSA is replaced by the corresponding Dolgopolsky class (Dolgopolsky, 1986) . This conversion was performed using LingPy (List and Forkel, 2021) as well. For each alignment column, the symbols in this column are conveived of as states of a continuous time Markov process. The specific type of Markov process used is due to Jukes and Cantor (1969) .", |
| "cite_spans": [ |
| { |
| "start": 210, |
| "end": 229, |
| "text": "(Dolgopolsky, 1986)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 275, |
| "end": 298, |
| "text": "(List and Forkel, 2021)", |
| "ref_id": null |
| }, |
| { |
| "start": 477, |
| "end": 500, |
| "text": "Jukes and Cantor (1969)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Bayesian Phylogenetic Inference", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "Let a phylogeny -i.e., a tree with branch lengths -over the languages in question be given. It is assumed that the types of symbols within an alignment colum are the states of a continuous time Markov process. A complete model is one where each node is assigned exactly one state. For the leaf nodes, these are the entries of the MSA column. Let u and l be the states at the top and at the bottom of a branch of the phylogenetic tree, and let t be the length of the branch.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Bayesian Phylogenetic Inference", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "The likelihood of this branch is", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Bayesian Phylogenetic Inference", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "P (l|u) = \uf8f1 \uf8f2 \uf8f3 1 n + n\u22121 n e \u2212rt if u = l 1 n \u2212 1 n e \u2212rt else,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Bayesian Phylogenetic Inference", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "where n is the number of distinct symbols occurring in the MSA column. The rate r is a model paramter and is always positive.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Bayesian Phylogenetic Inference", |
| "sec_num": "3.4" |
| }, |
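The branch likelihood defined in the text translates directly into code (a sketch; states are arbitrary hashable labels, names mine):

```python
import math

def branch_likelihood(u, l, r, t, n):
    """P(l | u) along a branch of length t under the n-state Jukes-Cantor
    process with rate r, as defined in the text."""
    e = math.exp(-r * t)
    if u == l:
        return 1.0 / n + (n - 1.0) / n * e
    return 1.0 / n - e / n
```

Each row of the implied transition matrix sums to one, and as t approaches 0 the matrix approaches the identity.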
| { |
| "text": "The total likelihood of an assignment of states to the nodes of the tree is the product of all branch likelihood, times the likelihood of the state at the root. For this I assumed a uniform distribution.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Bayesian Phylogenetic Inference", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "The marginal likelhood of the states at the leaves, given a phylogeny T and rate r is the sum of the likelihoods of all assignments of states to non-leaf nodes. The likelihood of a complete character matrix, given a phylogeny and an assignment of a rate value for each character (i.e., MSA column), is the product of the likelihoods of the individual characters. When a character state for a language is unknown -either because it is a gap in the MSA, or the language does not have a reflex for the corresponding cognate class -the marginal likelihood is computed as the sum of the likelihoods for all possible character states.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Bayesian Phylogenetic Inference", |
| "sec_num": "3.4" |
| }, |
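Summing over all assignments of states to non-leaf nodes is standardly done with Felsenstein's pruning algorithm. A sketch under the Jukes-Cantor transition probabilities from the text, with missing tips marginalized out; the tree encoding and all names here are mine:

```python
import math

def jc_p(i, j, r, t, n):
    """n-state Jukes-Cantor transition probability along a branch of length t."""
    e = math.exp(-r * t)
    return 1.0 / n + (n - 1.0) / n * e if i == j else 1.0 / n - e / n

def partial_lik(node, states, tips, r):
    """Felsenstein pruning: partial likelihoods at `node` for each state.
    Nodes are ('leaf', name) or ('node', [(child, branch_len), ...]).
    tips[name] is the observed state, or None for missing data; a None tip
    contributes 1 for every state, which marginalizes the tip out."""
    n = len(states)
    if node[0] == "leaf":
        obs = tips[node[1]]
        return [1.0 if obs is None or obs == s else 0.0 for s in states]
    L = [1.0] * n
    for child, t in node[1]:
        Lc = partial_lik(child, states, tips, r)
        for i in range(n):
            L[i] *= sum(jc_p(i, j, r, t, n) * Lc[j] for j in range(n))
    return L

def marginal_likelihood(root, states, tips, r):
    """Sum over root states under a uniform root distribution."""
    L = partial_lik(root, states, tips, r)
    return sum(L) / len(states)
```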
| { |
| "text": "Given suitable priors for the phylogeny and the rates, the posterior distribution over trees can be estimated via Bayesian inference for the collection of MSAs as data.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Bayesian Phylogenetic Inference", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "This step was carried out using the software Mr-Bayes (Ronquist and Huelsenbeck, 2003) . Rates were allowed to vary between characters, but are drawn from a discretized Gamma distribution with equal mean and variance. The mean of this hyperprior distribution is drawn from a standard exponential distribution. A uniform prior distribution over tree topologies was assumed, paired with a standard exponential prior distribution over the tree age and a uniform prior distribution over the branch lengths.", |
| "cite_spans": [ |
| { |
| "start": 54, |
| "end": 86, |
| "text": "(Ronquist and Huelsenbeck, 2003)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Bayesian Phylogenetic Inference", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "The posterior tree distribution for the running example is visualized in Figure 2 (produced with the software densitree, Bouckaert and Heled 2014). It can be seen that there is considerable uncertainty regarding the position of French and Albanian in the tree, as well as regarding the height of the tree.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 73, |
| "end": 81, |
| "text": "Figure 2", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Bayesian Phylogenetic Inference", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "While I used Dolgopolsky sound classes for phylogenetic inference, cognate inference has to operate on IPA characters. For this purpose, I used the posterior tree distribution from the previous step as prior distribution. Data are MSAs of IPA strings. For the running example, this looks as in Table 4 . (Note that the MSA is computed on the basis of ASJP strings, and ASJP symbols are replaced by the corresponding IPA symbols afterwards.)", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 294, |
| "end": 301, |
| "text": "Table 4", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Inferring Mutation Rates", |
| "sec_num": "3.5" |
| }, |
| { |
| "text": "As a further deviation from the previous step, gaps (indicated by \"-\") are treated as normal char- acter states, while missing data (indicated by \".\") are marginalized out. For this step I assumed a constant rate over all characters. Inference was performed using the Julia package MCPhylo.jl (Wahle 2021; https: //juliapackages.com/p/mcphylo), leading to a sample from the posterior distribution over rates.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inferring Mutation Rates", |
| "sec_num": "3.5" |
| }, |
| { |
| "text": "For cognate prediction, the attested entries of the cognate class in question are aligned using the procedure and the model described in Subsection 3.3. If the test data contain symbols not occurring in the training data, their emission probabilities are set to the minimal emission probability of any symbol from the training data, and emission probabilities are re-normalized in the trained pHMM.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Multiple Sequence Alignment of Test Data", |
| "sec_num": "3.6" |
| }, |
| { |
| "text": "For the running example, the MSA is shown in Table 5. The entries for French (shown in boldface) are unknown and have to be inferred in the final step.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 45, |
| "end": 52, |
| "text": "Table 5", |
| "ref_id": "TABREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Multiple Sequence Alignment of Test Data", |
| "sec_num": "3.6" |
| }, |
| { |
| "text": "Albanian: p e S k -; English: f I S - -; French: p i S k -; German: f i S - -; Latin: p i s k i", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Multiple Sequence Alignment of Test Data", |
| "sec_num": "3.6" |
| }, |
| { |
| "text": "Missing-value imputation is done column-wise. Using the posterior distribution over trees and rates described in Subsections 3.4 and 3.5, for each slot the posterior probability distribution over the symbols occurring elsewhere in the column was computed. This was practically implemented by separately computing the posterior probabilities for all candidate symbols separately and normalizing them. As prediction, the symbol with the highest posterior probability was chosen. The final cognate prediction is the result of removing all gap symbols -piSk in the example.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Cognate Prediction", |
| "sec_num": "3.7" |
| }, |
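The per-slot MAP choice can be sketched generically. Here `column_lik` stands in for the marginal likelihood of a fully specified column (e.g. computed by pruning over one posterior tree/rate sample); all names are illustrative, not the author's API:

```python
def impute_slot(masked_lang, candidates, observed, posterior_sample, column_lik):
    """MAP imputation for one alignment column: score each candidate symbol by
    its column likelihood averaged over a posterior sample of (tree, rate)
    pairs, then normalize and pick the argmax.
    column_lik(tree, rate, column) -> marginal likelihood of the column."""
    scores = {}
    for c in candidates:
        column = dict(observed)
        column[masked_lang] = c  # fill the masked slot with the candidate
        scores[c] = sum(column_lik(tree, rate, column)
                        for tree, rate in posterior_sample) / len(posterior_sample)
    z = sum(scores.values())
    posterior = {c: s / z for c, s in scores.items()}
    return max(posterior, key=posterior.get), posterior
```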
| { |
| "text": "Let me close with a brief reflection on what kind of information this system extracts from the training set to perform cognate prediction. There are mainly two patterns the system pays attention to. The first is the regularity of sound correspondences which are encapsulated in the emission probabilites of the trained pHMM, especially its M state. The system does not pay attention to the specific languages the words to be aligned come from, so it is unaware of language-specific sound correspondences. Therefore the prediction step does not make use of specific sound laws in any way. Second, the system employs phylogenetic information. This amounts to a weighing of the importance of the cognates from other languages when deciding on the choice of the missing value imputation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Also, since the missing value imputation is performed column-wise for the alignment matrix, no syntagmatic information is being used. It is not checked which candidate predictions are phonotactically or morphologically most similar to the training words from the same languages.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "4" |
| }, |
| { |
| "text": "In future research, it is worth considering to extend the system towards the usage of languagespecific sound correspondences and syntagmatic information.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "4" |
| }, |
| { |
| "text": "The source code and instructions how to run the system are publicly available at https://github.com/gerhardJaeger/ gerhardSigtyp2022 (also archived on Zenodo under the doi 10.5281/zenodo.6559085). Most of the workflow was implemented in the Julia language (https://julialang.org/), a relatively new language combining the convenient syntax and interactive functionality of languages such as Python with execution speed of optimized code close to C or Java. Essential Julia packages used are Johannes Wahle's MCPhylo.jl (which is based on Brian J. Smith' Mamba.jl package; https://mambajl.readthedocs.io/ en/latest/) for phylogenetic Bayesian inference and my own package SequenceAlignment.jl (https://github.com/gerhardJaeger/ SequenceAlignment.jl, v0.9.1) for sequence alignment.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Supplementary Material", |
| "sec_num": null |
| }, |
| { |
| "text": "For conversions between different sound class systems, the Python package LingPy (List and Forkel, 2021) was used. Besides MCPhylo.jl, I used MrBayes (Ronquist and Huelsenbeck, 2003) for phylogenetic inference. Postprocessing of the output of MrBayes was done with the R package ape (Paradis et al., 2004) .", |
| "cite_spans": [ |
| { |
| "start": 150, |
| "end": 182, |
| "text": "(Ronquist and Huelsenbeck, 2003)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 283, |
| "end": 305, |
| "text": "(Paradis et al., 2004)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Supplementary Material", |
| "sec_num": null |
| }, |
| { |
| "text": "In J\u00e4ger (2019), pairwise string alignment was performed using the Needleman-Wunsch algorithm(Needleman and Wunsch, 1970) with parameters trained on the entire ASJP database(Wichmann et al., 2016). Since the rules of the Shared Task precludes the use or external data for parameter training, I opted for a method here were parameters can be estimated from scratch using only the licit training data.2 In J\u00e4ger (2019) phylogenetic inference was performed using cognate data, but since the Shared Task does not make information about the meaning of the words available, this was not possible here.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "This inference step amounts to a notational variant of the Needleman-Wunsch algorithm, cf.Needleman and Wunsch (1970).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Here and elsewhere, symbols indicating morpheme boundaries were ignored.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "I am grateful to Johannes Wahle for technical support during the implementation.This research was supported by the DFG Centre for Advanced Studies in the Humanities Words, Bones, Genes, Tools (DFG-KFG 2237) and by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant agreement 834050).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgements", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Densitree 2: Seeing trees through the forest", |
| "authors": [ |
| { |
| "first": "Remco", |
| "middle": [ |
| "R" |
| ], |
| "last": "Bouckaert", |
| "suffix": "" |
| }, |
| { |
| "first": "Joseph", |
| "middle": [], |
| "last": "Heled", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "DOI": [ |
| "10.1101/012401" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Remco R. Bouckaert and Joseph Heled. 2014. Densitree 2: Seeing trees through the forest. bioRxiv. doi.org/10.1101/012401.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Automated classification of the world's languages: A description of the method and preliminary results", |
| "authors": [ |
| { |
| "first": "Cecil", |
| "middle": [ |
| "H" |
| ], |
| "last": "Brown", |
| "suffix": "" |
| }, |
| { |
| "first": "Eric", |
| "middle": [ |
| "W" |
| ], |
| "last": "Holman", |
| "suffix": "" |
| }, |
| { |
| "first": "S\u00f8ren", |
| "middle": [], |
| "last": "Wichmann", |
| "suffix": "" |
| }, |
| { |
| "first": "Viveka", |
| "middle": [], |
| "last": "Velupillai", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "", |
| "volume": "4", |
| "issue": "", |
| "pages": "285--308", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Cecil H. Brown, Eric W. Holman, S\u00f8ren Wichmann, and Viveka Velupillai. 2008. Automated classification of the world's languages: A description of the method and preliminary results. STUF -Language Typology and Universals, 4:285-308.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "A probabilistic hypothesis concerning the oldest relationships among the language families of Northern Eurasia", |
| "authors": [ |
| { |
| "first": "Aaron", |
| "middle": [ |
| "B" |
| ], |
| "last": "Dolgopolsky", |
| "suffix": "" |
| } |
| ], |
| "year": 1986, |
| "venue": "Typology, Relationship and Time: A collection of papers on language change and relationship by Soviet linguists", |
| "volume": "", |
| "issue": "", |
| "pages": "27--50", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Aaron B. Dolgopolsky. 1986. A probabilistic hypothesis concerning the oldest relationships among the language families of Northern Eurasia. In V. V. Shevoroshkin, editor, Typology, Relationship and Time: A collection of papers on language change and relationship by Soviet linguists, pages 27-50. Karoma Publisher, Ann Arbor.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Biological Sequence Analysis", |
| "authors": [ |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Durbin", |
| "suffix": "" |
| }, |
| { |
| "first": "Sean", |
| "middle": [ |
| "R" |
| ], |
| "last": "Eddy", |
| "suffix": "" |
| }, |
| { |
| "first": "Anders", |
| "middle": [], |
| "last": "Krogh", |
| "suffix": "" |
| }, |
| { |
| "first": "Graeme", |
| "middle": [], |
| "last": "Mitchison", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Richard Durbin, Sean R. Eddy, Anders Krogh, and Graeme Mitchison. 1998. Biological Sequence Analysis. Cambridge University Press, Cambridge, UK.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Progressive sequence alignment as a prerequisite to correct phylogenetic trees", |
| "authors": [ |
| { |
| "first": "Da-Fei", |
| "middle": [], |
| "last": "Feng", |
| "suffix": "" |
| }, |
| { |
| "first": "Russell", |
| "middle": [ |
| "F" |
| ], |
| "last": "Doolittle", |
| "suffix": "" |
| } |
| ], |
| "year": 1987, |
| "venue": "Journal of Molecular Evolution", |
| "volume": "25", |
| "issue": "4", |
| "pages": "351--360", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Da-Fei Feng and Russell F. Doolittle. 1987. Progressive sequence alignment as a prerequisite to correct phylogenetic trees. Journal of Molecular Evolution, 25(4):351-360.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Computational historical linguistics", |
| "authors": [ |
| { |
| "first": "Gerhard", |
| "middle": [], |
| "last": "J\u00e4ger", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Theoretical Linguistics", |
| "volume": "45", |
| "issue": "3-4", |
| "pages": "151--182", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gerhard J\u00e4ger. 2019. Computational historical linguistics. Theoretical Linguistics, 45(3-4):151-182.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Evolution of protein molecules", |
| "authors": [ |
| { |
| "first": "Thomas", |
| "middle": [ |
| "H" |
| ], |
| "last": "Jukes", |
| "suffix": "" |
| }, |
| { |
| "first": "Charles", |
| "middle": [ |
| "R" |
| ], |
| "last": "Cantor", |
| "suffix": "" |
| } |
| ], |
| "year": 1969, |
| "venue": "Mammalian protein metabolism", |
| "volume": "", |
| "issue": "", |
| "pages": "21--132", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thomas H. Jukes and Charles R. Cantor. 1969. Evolution of protein molecules. In H. N. Munro, editor, Mammalian protein metabolism, pages 21-132. Academic Press, New York and London.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "The significance of word lists", |
| "authors": [ |
| { |
| "first": "Brett", |
| "middle": [], |
| "last": "Kessler", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Brett Kessler. 2001. The significance of word lists. CSLI Publications, Stanford.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Lingpy. A Python library for historical linguistics. Version 2.6.9", |
| "authors": [ |
| { |
| "first": "Johann-Mattis", |
| "middle": [], |
| "last": "List", |
| "suffix": "" |
| }, |
| { |
| "first": "Robert", |
| "middle": [], |
| "last": "Forkel", |
| "suffix": "" |
| } |
| ], |
| "year": 2021, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Johann-Mattis List and Robert Forkel. 2021. Lingpy. A Python library for historical linguistics. Version 2.6.9. URL: https://lingpy.org, DOI: https://zenodo.org/badge/latestdoi/5137/lingpy/lingpy. With contributions by Greenhill, Simon, Tresoldi, Tiago, Christoph Rzymski, Gereon Kaiping, Steven Moran, Peter Bouda, Johannes Dellert, Taraka Rama, Frank Nagel. Leipzig: Max Planck Institute for Evolutionary Anthropology.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "The SIGTYP 2022 shared task on the prediction of cognate reflexes", |
| "authors": [ |
| { |
| "first": "Johann-Mattis", |
| "middle": [], |
| "last": "List", |
| "suffix": "" |
| }, |
| { |
| "first": "Ekaterina", |
| "middle": [], |
| "last": "Vylomova", |
| "suffix": "" |
| }, |
| { |
| "first": "Robert", |
| "middle": [], |
| "last": "Forkel", |
| "suffix": "" |
| }, |
| { |
| "first": "Nathan", |
| "middle": [ |
| "W" |
| ], |
| "last": "Hill", |
| "suffix": "" |
| }, |
| { |
| "first": "Ryan", |
| "middle": [], |
| "last": "Cotterell", |
| "suffix": "" |
| } |
| ], |
| "year": 2022, |
| "venue": "The Fourth Workshop on Computational Typology and Multilingual NLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Johann-Mattis List, Ekaterina Vylomova, Robert Forkel, Nathan W. Hill, and Ryan Cotterell. 2022. The SIGTYP 2022 shared task on the prediction of cognate reflexes. In The Fourth Workshop on Computational Typology and Multilingual NLP, Online. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "A general method applicable to the search for similarities in the amino acid sequence of two proteins", |
| "authors": [ |
| { |
| "first": "Saul", |
| "middle": [ |
| "B" |
| ], |
| "last": "Needleman", |
| "suffix": "" |
| }, |
| { |
| "first": "Christian", |
| "middle": [ |
| "D" |
| ], |
| "last": "Wunsch", |
| "suffix": "" |
| } |
| ], |
| "year": 1970, |
| "venue": "Journal of Molecular Biology", |
| "volume": "48", |
| "issue": "", |
| "pages": "443--453", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Saul B. Needleman and Christian D. Wunsch. 1970. A general method applicable to the search for similarities in the amino acid sequence of two proteins. Journal of Molecular Biology, 48:443-453.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "T-Coffee: A novel method for fast and accurate multiple sequence alignment", |
| "authors": [ |
| { |
| "first": "C\u00e9dric", |
| "middle": [], |
| "last": "Notredame", |
| "suffix": "" |
| }, |
| { |
| "first": "Desmond", |
| "middle": [ |
| "G" |
| ], |
| "last": "Higgins", |
| "suffix": "" |
| }, |
| { |
| "first": "Jaap", |
| "middle": [], |
| "last": "Heringa", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Journal of Molecular Biology", |
| "volume": "302", |
| "issue": "1", |
| "pages": "205--217", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "C\u00e9dric Notredame, Desmond G Higgins, and Jaap Heringa. 2000. T-Coffee: A novel method for fast and accurate multiple sequence alignment. Journal of Molecular Biology, 302(1):205-217.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "APE: analyses of phylogenetics and evolution in R language", |
| "authors": [ |
| { |
| "first": "Emmanuel", |
| "middle": [], |
| "last": "Paradis", |
| "suffix": "" |
| }, |
| { |
| "first": "Julien", |
| "middle": [], |
| "last": "Claude", |
| "suffix": "" |
| }, |
| { |
| "first": "Korbinian", |
| "middle": [], |
| "last": "Strimmer", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Bioinformatics", |
| "volume": "20", |
| "issue": "2", |
| "pages": "289--290", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Emmanuel Paradis, Julien Claude, and Korbinian Strimmer. 2004. APE: analyses of phylogenetics and evolution in R language. Bioinformatics, 20(2):289-290.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "MrBayes 3: Bayesian phylogenetic inference under mixed models", |
| "authors": [ |
| { |
| "first": "Frederik", |
| "middle": [], |
| "last": "Ronquist", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [ |
| "P" |
| ], |
| "last": "Huelsenbeck", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Bioinformatics", |
| "volume": "19", |
| "issue": "12", |
| "pages": "1572--1574", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Frederik Ronquist and John P. Huelsenbeck. 2003. MrBayes 3: Bayesian phylogenetic inference under mixed models. Bioinformatics, 19(12):1572-1574.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "A statistical method for evaluating systematic relationships", |
| "authors": [ |
| { |
| "first": "Robert", |
| "middle": [ |
| "R" |
| ], |
| "last": "Sokal", |
| "suffix": "" |
| }, |
| { |
| "first": "Charles", |
| "middle": [ |
| "D" |
| ], |
| "last": "Michener", |
| "suffix": "" |
| } |
| ], |
| "year": 1958, |
| "venue": "", |
| "volume": "38", |
| "issue": "", |
| "pages": "1409--1438", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Robert R. Sokal and Charles D. Michener. 1958. A statistical method for evaluating systematic relationships. University of Kansas Science Bulletin, 38:1409-1438.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "No-U-Turn sampling for phylogenetic trees", |
| "authors": [ |
| { |
| "first": "Johannes", |
| "middle": [], |
| "last": "Wahle", |
| "suffix": "" |
| } |
| ], |
| "year": 2021, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "DOI": [ |
| "10.1101/2021.03.16.435623" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Johannes Wahle. 2021. No-U-Turn sampling for phylogenetic trees. bioRxiv. doi.org/10.1101/2021.03.16.435623.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "The ASJP database", |
| "authors": [ |
| { |
| "first": "S\u00f8ren", |
| "middle": [], |
| "last": "Wichmann", |
| "suffix": "" |
| }, |
| { |
| "first": "Eric", |
| "middle": [ |
| "W" |
| ], |
| "last": "Holman", |
| "suffix": "" |
| }, |
| { |
| "first": "Cecil", |
| "middle": [ |
| "H" |
| ], |
| "last": "Brown", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "S\u00f8ren Wichmann, Eric W. Holman, and Cecil H. Brown. 2016. The ASJP database (version 17). http://asjp.clld.org/.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "uris": null, |
| "text": "Infer the posterior distribution of the mutation rate of symbols within the columns of the MSAs.", |
| "type_str": "figure", |
| "num": null |
| }, |
| "FIGREF1": { |
| "uris": null, |
| "text": "Pair Hidden Markov Model", |
| "type_str": "figure", |
| "num": null |
| }, |
| "FIGREF3": { |
| "uris": null, |
| "text": "Figure 2: Posterior tree distribution Albanian . . . . . . English h A r t --French k oe r ---German h e r ts @ n Latin k o r d --", |
| "type_str": "figure", |
| "num": null |
| }, |
| "TABREF0": { |
| "num": null, |
| "text": "", |
| "type_str": "table", |
| "content": "<table><tr><td/><td/><td colspan=\"3\">: Example training data</td><td/></tr><tr><td colspan=\"6\">COGID Albanian English French German Latin</td></tr><tr><td>353-3</td><td>p e S k</td><td>f I S</td><td>?</td><td>f i S</td><td>p i s k i</td></tr></table>", |
| "html": null |
| }, |
| "TABREF1": { |
| "num": null, |
| "text": "Example test data", |
| "type_str": "table", |
| "content": "<table/>", |
| "html": null |
| }, |
| "TABREF2": { |
| "num": null, |
| "text": "", |
| "type_str": "table", |
| "content": "<table/>", |
| "html": null |
| }, |
| "TABREF3": { |
| "num": null, |
| "text": "Example MSA with IPA characters", |
| "type_str": "table", |
| "content": "<table/>", |
| "html": null |
| }, |
| "TABREF4": { |
| "num": null, |
| "text": "MSA for cognate prediction", |
| "type_str": "table", |
| "content": "<table/>", |
| "html": null |
| } |
| } |
| } |
| } |