{
"paper_id": "K18-1027",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:09:43.726083Z"
},
"title": "Similarity dependent Chinese Restaurant Process for Cognate Identification in Multilingual Wordlists",
"authors": [
{
"first": "Taraka",
"middle": [],
"last": "Rama",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Oslo",
"location": {
"country": "Norway"
}
},
"email": "tarakark@ifi.uio.no"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present and evaluate two similarity dependent Chinese Restaurant Process (sd-CRP) algorithms at the task of automated cognate detection. The sd-CRP clustering algorithms do not require any predefined threshold for detecting cognate sets in a multilingual word list. We evaluate the performance of the algorithms on six language families (more than 750 languages) and find that both the sd-CRP variants performs as well as InfoMap and better than UPGMA at the task of inferring cognate clusters. The algorithms presented in this paper are family agnostic and can be applied to any linguistically under-studied language family.",
"pdf_parse": {
"paper_id": "K18-1027",
"_pdf_hash": "",
"abstract": [
{
"text": "We present and evaluate two similarity dependent Chinese Restaurant Process (sd-CRP) algorithms at the task of automated cognate detection. The sd-CRP clustering algorithms do not require any predefined threshold for detecting cognate sets in a multilingual word list. We evaluate the performance of the algorithms on six language families (more than 750 languages) and find that both the sd-CRP variants performs as well as InfoMap and better than UPGMA at the task of inferring cognate clusters. The algorithms presented in this paper are family agnostic and can be applied to any linguistically under-studied language family.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Cognates are related words across languages that have descended from a common ancestral language. Identification of cognates is an important step in historical linguistics while establishing genetic relations between languages that are hypothesized to have descended from a single language that existed in the past. For instance, English hound and German Hund \"dog\" are cognates that go back to the Proto-Germanic stage. Cognate identification requires great amount of scholarly effort and is available for some language families such as Indo-European, Dravidian, Austronesian, and Uralic which have a long tradition of comparative linguistic research that involves decades (Dravidian family) to centuries (Indo-European family) of scholarly effort. Automatic detection of cognates with high accuracy is very much desired for reducing the effort required in analyzing understudied language families of the world.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Typically, expert annotated cognate sets are employed to infer phylogenetic trees showing language relationships that can be used to test hypotheses about temporal and spatial evolution of language families (Bouckaert et al., 2012; Chang et al., 2015) , linguistic reconstruction of ancestral states on a tree , or lexical reconstruction (Bouchard-C\u00f4t\u00e9 et al., 2013) . Rama et al. (2018) showed that cognates inferred from automated methods of cognate detection can be used to infer high quality phylogenetic trees. The authors noted that there is a need for more research towards developing highly accurate cognate identification methods that can be applied to the data of not so well-studied language families which will be of assistance to historical linguists to automate parts if not the whole of the comparative method.",
"cite_spans": [
{
"start": 207,
"end": 231,
"text": "(Bouckaert et al., 2012;",
"ref_id": "BIBREF4"
},
{
"start": 232,
"end": 251,
"text": "Chang et al., 2015)",
"ref_id": "BIBREF7"
},
{
"start": 338,
"end": 366,
"text": "(Bouchard-C\u00f4t\u00e9 et al., 2013)",
"ref_id": "BIBREF3"
},
{
"start": 369,
"end": 387,
"text": "Rama et al. (2018)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The last decades have seen a large amount of computational effort towards automatizing the process of cognate identification since the work of Covington (1996) and Kondrak (2002) . The computational effort involved devising new sequence alignment algorithms (Kondrak, 2005 (Kondrak, , 2009 , novel sound transition matrices which are linguistically guided (Kondrak, 2001; List, 2012b) or data-driven (J\u00e4ger, 2013; Rama et al., 2013 Rama et al., , 2017 List, 2012a) , and machine learning approaches (Hauer and Kondrak, 2011; Rama, 2015 Rama, , 2016 to identify cognates within multilingual word lists (see table 1 ; Swadesh, 1952) belonging to different language families and dictionaries (St Arnaud et al., 2017) .",
"cite_spans": [
{
"start": 143,
"end": 159,
"text": "Covington (1996)",
"ref_id": "BIBREF8"
},
{
"start": 164,
"end": 178,
"text": "Kondrak (2002)",
"ref_id": "BIBREF20"
},
{
"start": 258,
"end": 272,
"text": "(Kondrak, 2005",
"ref_id": "BIBREF21"
},
{
"start": 273,
"end": 289,
"text": "(Kondrak, , 2009",
"ref_id": "BIBREF22"
},
{
"start": 356,
"end": 371,
"text": "(Kondrak, 2001;",
"ref_id": "BIBREF19"
},
{
"start": 372,
"end": 384,
"text": "List, 2012b)",
"ref_id": "BIBREF26"
},
{
"start": 400,
"end": 413,
"text": "(J\u00e4ger, 2013;",
"ref_id": "BIBREF16"
},
{
"start": 414,
"end": 431,
"text": "Rama et al., 2013",
"ref_id": "BIBREF35"
},
{
"start": 432,
"end": 451,
"text": "Rama et al., , 2017",
"ref_id": "BIBREF37"
},
{
"start": 452,
"end": 464,
"text": "List, 2012a)",
"ref_id": "BIBREF25"
},
{
"start": 499,
"end": 524,
"text": "(Hauer and Kondrak, 2011;",
"ref_id": "BIBREF14"
},
{
"start": 525,
"end": 535,
"text": "Rama, 2015",
"ref_id": "BIBREF33"
},
{
"start": 536,
"end": 548,
"text": "Rama, , 2016",
"ref_id": "BIBREF34"
},
{
"start": 616,
"end": 630,
"text": "Swadesh, 1952)",
"ref_id": "BIBREF46"
},
{
"start": 689,
"end": 713,
"text": "(St Arnaud et al., 2017)",
"ref_id": "BIBREF45"
}
],
"ref_spans": [
{
"start": 601,
"end": 613,
"text": "(see table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Most of the above cognate identification methods involve a workflow consisting of computation of distances between all the word pairs that have the same meaning using a machine learning algorithm or a sequence alignment algorithm; and, then clustering the pairwise distance matrix using a clustering algorithm such as InfoMap (Rosvall and Bergstrom, 2008) or UPGMA (Unweighted Pair Group Method with Arithmetic Mean; Sokal and Michener, 1958) .",
"cite_spans": [
{
"start": 326,
"end": 355,
"text": "(Rosvall and Bergstrom, 2008)",
"ref_id": "BIBREF39"
},
{
"start": 365,
"end": 416,
"text": "(Unweighted Pair Group Method with Arithmetic Mean;",
"ref_id": null
},
{
"start": 417,
"end": 442,
"text": "Sokal and Michener, 1958)",
"ref_id": "BIBREF44"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
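The two-stage workflow described above (pairwise distances, then clustering) can be sketched in plain Python. `edit_distance` and `distance_matrix` are illustrative stand-ins that use length-normalized Levenshtein distance rather than the SVM or alignment scores the cited systems actually use:

```python
def edit_distance(a, b):
    # classic dynamic-programming Levenshtein distance
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def distance_matrix(words):
    # length-normalized edit distances between all word pairs of one meaning
    n = len(words)
    D = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            d = edit_distance(words[i], words[j]) / max(len(words[i]), len(words[j]))
            D[i][j] = D[j][i] = d
    return D
```

The resulting matrix would then be handed to a clustering algorithm such as UPGMA or InfoMap.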
{
"text": "Both InfoMap and UPGMA require a predefined threshold that is either set heuristically or through tuned to obtain to obtain optimal perfor-Language ALL AND . . . mance at identifying cognate clusters on a heldout expert annotated cognate dataset(s). The clustering threshold is a single number that is tuned for all the meanings and not separately for each of the meanings. A single global threshold can lead to poor performance since the number of cognate sets vary a lot across meanings for different language families. For instance, the Indo-European dataset has cognate cluster sizes ranging from 37 for meaning because to 1 for meaning name.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "On the other hand, a non-parametric clustering method such as Chinese Restaurant Process (CRP; Gershman and Blei 2012) can form clusters directly from the data without the need for tuning the threshold. CRP has found application in different NLP tasks such as morphological segmentation (Goldwater et al., 2006) , language modeling (Goldwater et al., 2011) , machine translation (Ravi and Knight, 2011) , part-of-speech induction (Blunsom and Cohn, 2011; Sirts et al., 2014) , and language decipherment (Snyder et al., 2010) .",
"cite_spans": [
{
"start": 287,
"end": 311,
"text": "(Goldwater et al., 2006)",
"ref_id": "BIBREF10"
},
{
"start": 332,
"end": 356,
"text": "(Goldwater et al., 2011)",
"ref_id": "BIBREF11"
},
{
"start": 379,
"end": 402,
"text": "(Ravi and Knight, 2011)",
"ref_id": "BIBREF38"
},
{
"start": 430,
"end": 454,
"text": "(Blunsom and Cohn, 2011;",
"ref_id": "BIBREF2"
},
{
"start": 455,
"end": 474,
"text": "Sirts et al., 2014)",
"ref_id": "BIBREF41"
},
{
"start": 503,
"end": 524,
"text": "(Snyder et al., 2010)",
"ref_id": "BIBREF42"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we present two clustering algorithms inspired from similarity dependent Chinese Restaurant Process for the purpose of inferring cognate clusters. Our CRP based clustering algorithms take a word pair similarity matrix as input and infer cognate clusters automatically without needing any threshold. The sd-CRP algorithms have a hyperparameter \u03b1 that allows us to form new clusters. We compare the performance of the CRP algorithms on six different language families and find that the CRP algorithms better than UP-GMA and yields better or competing performance against InfoMap. We sample \u03b1 so that the algorithms are robust to the initial value of \u03b1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The paper is organized as follows. We describe related work in section 2. In section 3, we describe the word similarity features used to train the SVM model. We describe sd-CRP, UPGMA, and InfoMap algorithms in section 4. We describe the evaluation metrics and datasets in section 5. We present the results of our experiments in section 6. We discuss the results by analyzing the effect of features on SVM model, initial \u03b1 values, and missing data on the performance of clustering in section 7. Finally, we conclude and present directions for future work in section 8.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Most of the automated cognate identification work mentioned in the previous section employed either UPGMA or InfoMap algorithms. Hauer and Kondrak (2011) were the first to apply UPGMA clustering algorithm to infer cognate sets from Swadesh lists. The authors trained a SVM classifier based on string similarity features to calculate word distances between all word pairs for a meaning. The pair-wise distance matrix is supplied to UPGMA with a predefined threshold for inferring word clusters. The UPGMA algorithm is simple and yields reasonable results across various language families (List, 2012a) . However, UPGMA clustering algorithm is dependent on the threshold that needs to be tuned to obtain optimal performance (List et al., 2017b) . The cognate identification work of Hall and Klein (2011) and Bouchard-C\u00f4t\u00e9 et al. (2013) requires the phylogenetic tree of the language family to be known beforehand which is an unrealistic assumption for large number of world's language families. In another work, List et al. (2016) employ a weighted variant of Levenshtein distance known as SCA (see section 3) for calculating similarity between two words. Then, they apply a community detection algorithm known as InfoMap for the purpose of discovering partial cognate sets in multiple groups of Sino-Tibetan language family. The authors find that the InfoMap algorithm works better than UPGMA when tuned for threshold. In this paper, we compare the CRP clustering algorithms against InfoMap and the similarity variant of UPGMA algorithm described in section 4.3.",
"cite_spans": [
{
"start": 129,
"end": 153,
"text": "Hauer and Kondrak (2011)",
"ref_id": "BIBREF14"
},
{
"start": 587,
"end": 600,
"text": "(List, 2012a)",
"ref_id": "BIBREF25"
},
{
"start": 722,
"end": 742,
"text": "(List et al., 2017b)",
"ref_id": "BIBREF29"
},
{
"start": 780,
"end": 801,
"text": "Hall and Klein (2011)",
"ref_id": "BIBREF13"
},
{
"start": 806,
"end": 833,
"text": "Bouchard-C\u00f4t\u00e9 et al. (2013)",
"ref_id": "BIBREF3"
},
{
"start": 1010,
"end": 1028,
"text": "List et al. (2016)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "In this section, we present the word similarity features used to train our SVM model at the binary task of classifying if a word pair is cognate or noncognate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word similarity model",
"sec_num": "3"
},
{
"text": "We use length normalized edit distance, number of common bigrams, common prefix length, individual word lengths, and absolute difference between the word lengths as features for training a SVM classifier (Hauer and Kondrak, 2011) . We refer to this feature set as HK.",
"cite_spans": [
{
"start": 204,
"end": 229,
"text": "(Hauer and Kondrak, 2011)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "String similarity features",
"sec_num": null
},
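A minimal sketch of the HK feature vector, assuming shared bigrams are counted as shared bigram types (set intersection); the function name `hk_features` is ours:

```python
def edit_distance(a, b):
    # dynamic-programming Levenshtein distance
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def hk_features(w1, w2):
    # HK features: normalized edit distance, number of shared bigram
    # types, common prefix length, both word lengths, length difference
    bigrams = lambda w: {w[i:i + 2] for i in range(len(w) - 1)}
    ned = edit_distance(w1, w2) / max(len(w1), len(w2))
    prefix = 0
    for a, b in zip(w1, w2):
        if a != b:
            break
        prefix += 1
    return [ned, len(bigrams(w1) & bigrams(w2)), prefix,
            len(w1), len(w2), abs(len(w1) - len(w2))]
```

Such vectors, computed for labelled cognate/non-cognate pairs, would form the training data for the SVM.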
{
"text": "Point-wise Mutual Information (PMI) We include PMI weighted Needleman-Wunsch (Needleman and Wunsch, 1970) word similarity score (J\u00e4ger, 2013) as an additional feature for training the SVM classifier. The (unweighted or vanilla) Needleman-Wunsch algorithm is the similarity counterpart of the Levenshtein distance. The vanilla Needleman-Wunsch algorithm assigns equal negative weight to a common sound correspondence such as /s/ \u223c /h/ and a highly improbable sound correspondence such as /p/ \u223c /r/. The PMI weighted sound pair matrix inferred in J\u00e4ger (2013) assigns a positive weight to common sound correspondences and a negative weight to the latter ones. The PMI weight for two sounds i and j is defined as log",
"cite_spans": [
{
"start": 128,
"end": 141,
"text": "(J\u00e4ger, 2013)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "String similarity features",
"sec_num": null
},
{
"text": "p(i,j) q(i)\u2022q(j)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "String similarity features",
"sec_num": null
},
{
"text": "where, p(i, j) is the relative frequency of i, j occurring at the same position in the aligned word pairs and q(.) is the relative frequency of a sound in the whole word list. The similarity score for a word pair is computed using PMI-weighted Needleman-Wunsch algorithm. We transform the word similarity score using sigmoid function to yield a score between 0 and 1.0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "String similarity features",
"sec_num": null
},
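A small sketch of the PMI weighting and the sigmoid squashing under the definitions above; `pmi_weights` and `squash` are illustrative names, and the input is assumed to be a flat list of aligned sound pairs:

```python
import math
from collections import Counter

def pmi_weights(aligned_pairs, wordlist):
    # PMI weight for a sound pair (i, j): log p(i, j) / (q(i) * q(j)),
    # with p from co-occurrence at aligned positions and q from the
    # overall sound frequencies in the word list.
    pair_counts = Counter(aligned_pairs)            # e.g. [('s', 'h'), ...]
    total_pairs = sum(pair_counts.values())
    sound_counts = Counter(s for w in wordlist for s in w)
    total_sounds = sum(sound_counts.values())
    q = {s: c / total_sounds for s, c in sound_counts.items()}
    return {(i, j): math.log((c / total_pairs) / (q[i] * q[j]))
            for (i, j), c in pair_counts.items()}

def squash(score):
    # sigmoid transform mapping a raw similarity score into (0, 1)
    return 1.0 / (1.0 + math.exp(-score))
```

Common correspondences (high p relative to chance) receive positive weights; improbable ones receive negative weights.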
{
"text": "SCA We experimented with SCA (Sound Class Based Phonetic Alignment) word distance score (List et al., 2016) as an additional feature in our SVM model and found that inclusion of this feature improves the performance of cognate clustering systems. The SCA distance score is computed using the LingPy library (List et al., 2017a ).",
"cite_spans": [
{
"start": 88,
"end": 107,
"text": "(List et al., 2016)",
"ref_id": "BIBREF30"
},
{
"start": 307,
"end": 326,
"text": "(List et al., 2017a",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "String similarity features",
"sec_num": null
},
{
"text": "All the above features are widely used in cognate identification papers cited in sections 1 and 2. All the string similarity features are computed on words represented in ASJP code consisting of symbols on standard QWERTY keyboard. The ASJP code consists of 41 symbols that is used to represent common sounds of the world's languages. As such it collapses some distinctions between similar sounds such as using a single 'r' symbol for all the rhotic sounds. In this paper, we used LingPy library to convert IPA symbols to ASJP symbols. Our SVM model is implemented using scikit-learn (Buitinck et al., 2013) . The trained SVM model is then used to predict the confidence scores for all the word pairs having the same meaning.",
"cite_spans": [
{
"start": 584,
"end": 607,
"text": "(Buitinck et al., 2013)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "String similarity features",
"sec_num": null
},
{
"text": "In this section, we motivate and describe the two sd-CRP algorithms followed by InfoMap and UP-GMA clustering algorithms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Clustering algorithms",
"sec_num": "4"
},
{
"text": "In the traditional CRP, the probability that a new customer i sits at a table already filled with customers is proportional to the number of customers sitting at the table. The probability that the new customer sits at a new table is proportional to \u03b1. Blei and Frazier (2011) extended the traditional CRP model to a distance-dependent CRP model (dd-CRP) where customer i sits with a different customer j with a probability proportional to f (d ij ) where f is a decay function and d ij is the distance between customers i and j. The new customer can sit by itself with a probability proportional to \u03b1. The dd-CRP formulation forms clusters through connections between the customers. This property to form clusters depending on the data is directly relevant for inferring cognate clusters from a word pair distance matrix.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation for CRP",
"sec_num": "4.1"
},
{
"text": "In a later paper, Socher et al. (2011) introduced a similarity dependent CRP (sd-CRP) algorithm that can handle arbitrary similarities between two customers. Socher et al. (2011) showed that their sd-CRP variant performs better than dd-CRP when clustering MNIST digits dataset and Newsgroup articles. A customer is a word in the context of cognate identification. We describe the two variants of sd-CRP -ns-CRP and sb-CRP -that work directly with a similarity matrix S in the next section.",
"cite_spans": [
{
"start": 158,
"end": 178,
"text": "Socher et al. (2011)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation for CRP",
"sec_num": "4.1"
},
{
"text": "Given a word similarity matrix S \u2208 R N \u00d7N and \u03b1, the CRP algorithm clusters N elements into K clusters where 1 <= K <= N .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "sd-CRP algorithms",
"sec_num": "4.2"
},
{
"text": "The algorithm starts by placing each word into its own cluster. At each step, the algorithm assign a word w i to the cluster C that has the highest net similarity with w i which gives the name to the algorithm. We define net similarity as Algorithm 1 ns-CRP Input: S, \u03b1 Ouput: Cluster assignments 1. Initialize each word into its own cluster and set \u03b1 to 0.1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ns-CRP",
"sec_num": "4.2.1"
},
{
"text": "\u2022 For each word wi -Remove wi from its cluster.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Repeat until convergence:",
"sec_num": "2."
},
{
"text": "-Compute the net similarity s ik between wi to all words in a cluster k. \u2022 Sample \u03b1 using a Metropolis-Hastings step",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Repeat until convergence:",
"sec_num": "2."
},
{
"text": "|C| j=1 S(w i , w j ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Repeat until convergence:",
"sec_num": "2."
},
{
"text": "We call the algorithm ns-CRP after the net similarity criterion used to perform cluster assignments. w i is assigned to a new cluster if \u03b1S(w i , w i ) is greater than any of the similarities with the existing clusters. Any empty clusters remaining at the end of an iteration are removed. The cluster inference procedure is summarized in Algorithm 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Repeat until convergence:",
"sec_num": "2."
},
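A compact sketch of the ns-CRP assignment loop (Algorithm 1), with \u03b1 held fixed rather than resampled; `ns_crp` is an illustrative name:

```python
def ns_crp(S, alpha=0.1, max_iter=100):
    # ns-CRP sketch: repeatedly reassign each word to the cluster with the
    # highest net similarity; open a new cluster when alpha * S[i][i] wins.
    n = len(S)
    assign = list(range(n))                      # start: one cluster per word
    for _ in range(max_iter):
        changed = False
        for i in range(n):
            old = assign[i]
            assign[i] = -1                       # remove w_i from its cluster
            # net similarity of w_i to every remaining cluster
            scores = {}
            for j in range(n):
                if assign[j] != -1:
                    scores[assign[j]] = scores.get(assign[j], 0.0) + S[i][j]
            new_tab = max(assign) + 1            # weight of a brand-new cluster
            scores[new_tab] = alpha * S[i][i]
            best = max(scores, key=scores.get)
            assign[i] = best
            changed |= best != old
        if not changed:                          # convergence: no reassignments
            break
    # relabel clusters densely (empty clusters disappear automatically)
    labels = {c: k for k, c in enumerate(dict.fromkeys(assign))}
    return [labels[c] for c in assign]
```

With a block-diagonal similarity matrix the loop recovers the blocks as clusters and leaves dissimilar words in singletons.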
{
"text": "Algorithm 2 sb-CRP Input: S, \u03b1 Ouput: Cluster assignments 1. Initialize each word to its own cluster and set \u03b1 to 0.1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Repeat until convergence:",
"sec_num": "2."
},
{
"text": "\u2022 For each word wi -Remove the outgoing link from wi.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Repeat until convergence:",
"sec_num": "2."
},
{
"text": "-Compute the net similarity s ik between wi and the words in the set returned by SitBehind(w k ). that w i is in its own cluster. The probability of forming a directed link from w i and w j is proportional to the sum of the similarity between w i and all the words in the set returned by SitBehind(w j ). The weight for linking w i to itself is computed as \u03b1S(w i , w i ). The sb-CRP is summarized in Algorithm 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Repeat until convergence:",
"sec_num": "2."
},
{
"text": "We present the result of application of sb-CRP algorithm to meaning fish in figure 1. The algorithm places the words correctly in their own clusters. The algorithm forms singleton clusters by forming self-loops. For instance, the algorithm links Ancient Greek ikhthis to itself thus, placing the word in its own cluster. When two words belonging to Bihari and Oriya are highly similar maTh \u223c maTho then, the algorithm links both the words to each other forming a cycle.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Repeat until convergence:",
"sec_num": "2."
},
{
"text": "Given K clusters out of which n are nonsingleton, algorithm 1 maximizes the following objective where k is the cluster index.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Underlying objective",
"sec_num": "4.2.3"
},
{
"text": "n k=1 (i,j)\u2208k S(w i , w j ) \u2212 K k=n+1,i\u2208k \u03b1S(w i , w i ) (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Underlying objective",
"sec_num": "4.2.3"
},
{
"text": "In the initial step, the objective in equation 1 is \u2212\u03b1 i S(w i , w i ) which increases until there is no change in the cluster reassignments. The objective for algorithm 2 is similar to equation 1 and only differs in the positive part due to SitBehind function. We use the above objective to sample \u03b1 which is explained below. We observe that the objective function given in equation 1 is similar to the CRP extension to K-Means (DP-Means) proposed by Kulis and Jordan (2011) who show that the DP-means algorithm converges to a local optimum.",
"cite_spans": [
{
"start": 452,
"end": 475,
"text": "Kulis and Jordan (2011)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Underlying objective",
"sec_num": "4.2.3"
},
{
"text": "We sample \u03b1 using a Metropolis-Hastings step. We will assume an exponential prior for \u03b1 with rate parameter 10. We assume an exponential prior since \u03b1 should be greater than zero and the support for the exponential distribution is R + . \u03b1 is sampled through a Metropolis-Hastings step at the end of each iteration. We use an asymmetric multiplier proposal q(\u03b1 * |\u03b1) = \u03b1 \u2022 e \u03b5(u\u22120.5) where u(\u2208 [0, 1]) is a uniform random number to propose a new \u03b1 * . The Hastings ratio for a multiplier proposal is \u03b5(u \u2212 0.5) where \u03b5 (= 1) is the tuning parameter that controls the range of proposed \u03b1 * (Lakner et al., 2008 ). Since we sample \u03b1 on fixed cluster assignments, the likelihood ratio is equal to \u03b1 * \u03b1 . The prior ratio is equal to exp(\u03b1 * ) exp(\u03b1) . In this paper, we run both the sd-CRP algorithms by setting the initial value of \u03b1 to 0.1 and running the algorithms for 100 iterations. We found that the algorithm converges within the first ten iterations (see section 7.4). The algorithms take less than three hours to run for the Austronesian language family. We report the final iteration's B-cubed F-scores and ARI scores (see section 5.2) for each dataset.",
"cite_spans": [
{
"start": 588,
"end": 608,
"text": "(Lakner et al., 2008",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sampling \u03b1",
"sec_num": "4.2.4"
},
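One Metropolis-Hastings update of \u03b1 can be sketched as below. The function name `sample_alpha` is ours, and we write the Exponential(rate) prior density ratio explicitly as exp(\u2212rate \u00b7 (\u03b1* \u2212 \u03b1)); the likelihood ratio on fixed cluster assignments is taken to be \u03b1*/\u03b1 as in the text:

```python
import math
import random

def sample_alpha(alpha, eps=1.0, rate=10.0, rng=random):
    # Multiplier proposal: alpha* = alpha * exp(eps * (u - 0.5)).
    # The Hastings (proposal) ratio of this move equals the multiplier.
    u = rng.random()
    mult = math.exp(eps * (u - 0.5))
    alpha_star = alpha * mult
    # acceptance = likelihood ratio * prior ratio * Hastings ratio
    ratio = (alpha_star / alpha) * math.exp(-rate * (alpha_star - alpha)) * mult
    if rng.random() < min(1.0, ratio):
        return alpha_star
    return alpha
```

Because the proposal is multiplicative, \u03b1 stays strictly positive, matching the support of the exponential prior.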
{
"text": "UPGMA The variant of St Arnaud et al. (2017) applied a ReLU transformation (max(0, s)) to the pairwise similarity matrix S such that the matrix consists only of positive similarity scores. In the initial step, each word is placed in its own cluster. The mutual score between two clusters is computed as the average of the similarity scores between all the word pairs. In each step, the algorithm merges two clusters with the highest pairwise score. The merging process is only stopped when no two clusters have positive average similarity score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Other Clustering algorithms",
"sec_num": "4.3"
},
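A sketch of this similarity-based UPGMA variant under the description above; `sim_upgma` is our name, and ties are broken by first encounter:

```python
def sim_upgma(S):
    # ReLU the similarities, then repeatedly merge the pair of clusters
    # with the highest average pairwise similarity while positive.
    n = len(S)
    S = [[max(0.0, S[i][j]) for j in range(n)] for i in range(n)]
    clusters = [[i] for i in range(n)]
    while len(clusters) > 1:
        best, pair = 0.0, None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                avg = sum(S[i][j] for i in clusters[a] for j in clusters[b])
                avg /= len(clusters[a]) * len(clusters[b])
                if avg > best:
                    best, pair = avg, (a, b)
        if pair is None:        # no two clusters with positive average score
            break
        a, b = pair
        clusters[a] += clusters[b]
        del clusters[b]
    return clusters
```

Unlike threshold-based UPGMA, the stopping rule here falls out of the sign of the (ReLU-transformed) similarities.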
{
"text": "InfoMap is an information-theoretic based clustering algorithm that uses random walks to detect clusters in a network (Rosvall and Bergstrom, 2008) . We transform the similarity matrix into a distance matrix by applying a sigmoid transformation then subtracting the matrix values from 1.0. Then, we apply a pre-defined threshold to form a disconnected graph. Finally, we supply the disconnected graph as input to the InfoMap algorithm to infer clusters. We also experimented with the threshold during cross-validation experiments on the training dataset and found that a threshold of 0.57 yielded slightly higher performance than a threshold of 0.5.",
"cite_spans": [
{
"start": 118,
"end": 147,
"text": "(Rosvall and Bergstrom, 2008)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Other Clustering algorithms",
"sec_num": "4.3"
},
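The preprocessing for InfoMap can be sketched as follows. We assume edges whose sigmoid-derived distance reaches the threshold are dropped, which is what yields a (possibly disconnected) graph; running InfoMap itself on the resulting edge list is omitted here:

```python
import math

def to_threshold_graph(S, threshold=0.57):
    # similarity -> distance via sigmoid then 1 - x; keep only edges
    # below the distance threshold, producing the graph fed to InfoMap
    n = len(S)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            dist = 1.0 - 1.0 / (1.0 + math.exp(-S[i][j]))
            if dist < threshold:
                edges.append((i, j, dist))
    return edges
```

High similarities map to small distances, so only sufficiently similar word pairs remain connected.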
{
"text": "In this section, we describe the datasets and cluster evaluation metrics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Materials and Evaluation",
"sec_num": "5"
},
{
"text": "Training dataset Wichmann and Holman (2013) and List (2014) compiled cognacy annotated multilingual word lists for subsets of families from various scholarly sources such as comparative handbooks and historical linguistics' articles. The detailed references to all the datasets are given in . Below, we provide the number of languages/number of meanings in each language group in parantheses.",
"cite_spans": [
{
"start": 17,
"end": 43,
"text": "Wichmann and Holman (2013)",
"ref_id": "BIBREF47"
},
{
"start": 48,
"end": 59,
"text": "List (2014)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "5.1"
},
{
"text": "\u2022 ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "5.1"
},
{
"text": "We use B-cubed F-score (Amig\u00f3 et al., 2009) and Adjusted Rand Index (Hubert and Arabie, 1985) to evaluate the quality of the inferred clusters.",
"cite_spans": [
{
"start": 23,
"end": 43,
"text": "(Amig\u00f3 et al., 2009)",
"ref_id": "BIBREF0"
},
{
"start": 68,
"end": 93,
"text": "(Hubert and Arabie, 1985)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5.2"
},
{
"text": "B-cubed F-scores are defined for each individual item (word) as follows. The precision for an item is defined as the ratio between the number of cognates in its cluster to the total number of items in its cluster. The recall for an item is defined as the ratio between the number of cognates in its cluster to the total number of expert labeled cognates. Finally, the B-cubed F-score for a meaning is computed as the harmonic mean of the items' average precision and recall. The B-cubed F-score for the whole dataset is computed as the average of the B-cubed F-scores across all the meanings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5.2"
},
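The per-item definitions above translate directly into code. This sketch (our function name `bcubed_f`; clusters given as label lists, with an item counted as its own cognate) computes the score for one meaning:

```python
def bcubed_f(gold, pred):
    # B-cubed F-score for one meaning: per-item precision and recall from
    # the overlap of the item's predicted cluster with its gold cluster,
    # then the harmonic mean of the two averages.
    n = len(gold)
    precs, recs = [], []
    for i in range(n):
        same_pred = {j for j in range(n) if pred[j] == pred[i]}
        same_gold = {j for j in range(n) if gold[j] == gold[i]}
        overlap = len(same_pred & same_gold)
        precs.append(overlap / len(same_pred))
        recs.append(overlap / len(same_gold))
    p, r = sum(precs) / n, sum(recs) / n
    return 2 * p * r / (p + r)
```

Averaging this quantity over all meanings gives the dataset-level score reported in the experiments.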
{
"text": "Adjusted Rand Index (ARI) is a chance corrected version of rand index (Hubert and Arabie, 1985) . The ARI scores are in the range of \u22121 to +1. A score of 0 indicates that the obtained clusters are randomly labelled whereas a score +1 indicates perfect match between the two clusters. The ARI score is zero whenever the gold standard groups all the words belonging to the same meaning slot (e.g. words for meaning name are cognate across the daughter Indo-European languages) as one cluster, whereas the B-cubed F-score is not zero in such a case.",
"cite_spans": [
{
"start": 70,
"end": 95,
"text": "(Hubert and Arabie, 1985)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5.2"
},
{
"text": "We visualize the B-cubed F-scores and ARI scores in figure 2. The spread of the F-scores and ARI scores suggest that InfoMap and sd-CRP variants are better than UPGMA in the case of all the datasets except for the Central Asian dataset. The box plots for InfoMap are similar to the box plots of sd-CRP variants across all the language fami-lies. InfoMap and sd-CRP variants have shorter width boxes than those of UPGMA across all the families. All the algorithms show the lowest performance in terms of both F-scores and ARI scores on the Austro-Asiatic dataset. Based on mean Fscores and ARI scores across all the four language families, we determine the ns-CRP algorithm to be the winner. Table 3 : Pearson's R between number of predicted clusters and number of clusters in the gold standard data. The best correlation for each language family is shaded in light gray.",
"cite_spans": [],
"ref_spans": [
{
"start": 691,
"end": 698,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "F-scores and ARI",
"sec_num": "6.1"
},
{
"text": "Apart from evaluating the cluster quality using B-cubed F-scores and ARI scores, we compare the number of inferred clusters by each algorithm against the number of clusters given in the gold standard data using Pearson's R. We present the results of Pearson's correlation in table 3. The Pearson's correlation between the number of predicted clusters and the number of gold clusters shows that the sd-CRP variants are successful at retrieving the right number of clusters when compared to UPGMA. InfoMap comes close to both sd-CRP variants' performance only in the case of the Central Asian languages dataset. The ns-CRP algorithm is the winner at being the best predictor of cluster sizes since it predicts clusters of sizes close to those given in the gold standard in the case of Austro-Asiatic and Austronesian datasets and shows same performance as sb-CRP in the case of the Central Asian dialects dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Size of inferred clusters",
"sec_num": "6.2"
},
{
"text": "In this section, we discuss the effect of feature selection and initial value of \u03b1 on the performance of sd-CRP algorithms. We verify the effect of missing data on all the clustering algorithms and present the results. Finally, we analyze the working of sd-CRP algorithms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7"
},
{
"text": "To ascertain which word similarity features contribute the most to the performance of the ns-CRP algorithm, we trained three simpler SVM models and evaluated the quality of the inferred clusters using these models. The first model HK uses only orthographic features. The second model uses the PMI word similarity as an additional feature to the HK model. The third model uses SCA word similarity as an additional feature to the HK model. The results presented in previous section showed that ns-CRP performs the worst on Austronesian and Austro-Asiatic datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature ablation",
"sec_num": "7.1"
},
{
"text": "Therefore, we present the cluster evaluation results only for these two datasets in table 4. The HK model yields high F-scores for both the datasets. Addition of PMI or SCA as an additional feature always improves both F-scores and ARI scores. In fact, including both PMI and SCA as features yields the best results even if the improvement is marginal in the case of the Austro-Asiatic dataset. We note that we observe similar trends for the rest of the datasets. We do not present the results for other datasets due to space constraints. Finally, the ablation experiments suggest that including both data-driven PMI and linguistically guided SCA as features gives the best results at cognate clustering.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature ablation",
"sec_num": "7.1"
},
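The ablation procedure described above — training one SVM per feature subset and comparing predictions — can be sketched like this. The feature values and labels are made-up toy data, and the column layout (HK orthographic score, PMI, SCA) is an assumption for illustration; only the subset-looping pattern reflects the experiment.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical per-word-pair features: [HK orthographic, PMI, SCA similarity].
X = np.array([
    [0.9,  2.1, 0.8],   # plausible cognate pair
    [0.8,  1.7, 0.7],
    [0.1, -1.2, 0.2],   # plausible non-cognate pair
    [0.2, -0.8, 0.1],
])
y = np.array([1, 1, 0, 0])  # 1 = cognate, 0 = non-cognate

# Ablation: fit one SVM per feature subset and inspect its predictions.
subsets = {"HK": [0], "HK+PMI": [0, 1], "HK+PMI+SCA": [0, 1, 2]}
for name, cols in subsets.items():
    clf = SVC(kernel="linear").fit(X[:, cols], y)
    print(name, clf.predict(X[:, cols]))
```

In the real experiment each subset's classifier would be scored with B-cubed F-scores and ARI on held-out clusterings rather than inspected on its training pairs.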
{
"text": "In this subsection, we investigate the effect of missing data on the clustering algorithms. In the case of the Austronesian dataset, less than 50% of the languages have word forms attested in 70% of the meanings. The situation is slightly better in the case of Austro-Asiatic with more than 80% of the languages having meanings attested in 70% of the meanings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effect of lexical coverage",
"sec_num": "7.2"
},
{
"text": "In a separate paper, Rama et al. (2018) presented pruned datasets for five different language families -Pama-Nyungan and Sino-Tibetan in addition to Austronesian, Austro-Asiatic, and Indo-European -consisting of only those languages that show the highest mutual lexical coverage. For each dataset, the authors pruned any language which has less than 75% mutual attestations with the rest of the languages. We attempted to prune the Central Asian dataset but found that we could only exclude a single dialect which has less than 50% attestation. Therefore, we did not include the Central Asian dataset in our experiments. The statistics of the pruned datasets is given in table 5.",
"cite_spans": [
{
"start": 21,
"end": 39,
"text": "Rama et al. (2018)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Effect of lexical coverage",
"sec_num": "7.2"
},
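One simplified reading of the pruning step described above can be sketched as follows. The coverage definition (shared meanings over the union) and the iterative drop-the-worst loop are assumptions for illustration; Rama et al. (2018) may define mutual attestation differently.

```python
def prune_low_coverage(wordlist, threshold=0.75):
    """Iteratively drop the language with the lowest average mutual coverage
    until every remaining language averages >= threshold coverage.

    wordlist: dict mapping language name -> set of attested meaning IDs.
    """
    langs = list(wordlist)

    def coverage(a, b):
        # Fraction of meanings attested in both languages (a sketch metric).
        return len(wordlist[a] & wordlist[b]) / max(len(wordlist[a] | wordlist[b]), 1)

    while len(langs) > 1:
        avg = {a: sum(coverage(a, b) for b in langs if b != a) / (len(langs) - 1)
               for a in langs}
        worst = min(avg, key=avg.get)
        if avg[worst] >= threshold:
            break
        langs.remove(worst)
    return langs

# Toy wordlists: language C is poorly attested and gets pruned away.
wl = {"A": {1, 2, 3, 4}, "B": {1, 2, 3, 4}, "C": {1}}
print(prune_low_coverage(wl))  # → ['A', 'B']
```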
{
"text": "Meanings Languages The dataset shows the number of meanings and languages in the pruned datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Family",
"sec_num": null
},
{
"text": "The results of this experiment are visualized in figure 3. The sd-CRP algorithms perform better than UPGMA and InfoMap in the case of Pama-Nyungan and Austro-Asiatic datasets. There seems to be no difference in the performance of all the algorithms in the case of the Sino-Tibetan dataset. There is no difference between sd-CRP and InfoMap algorithms in the case of the Austronesian dataset. Although the mean B-cubed Fscores indicate that there is no difference between the algorithms in the case of the Indo-European dataset, the spread of the box plots suggests that non-UPGMA algorithms perform better than UP-GMA. The B-cubed F-scores are not decisive in the case of the Indo-European dataset, whereas the ARI score clearly shows that non-UPGMA perform better than UPGMA. In conclusion, both the sd-CRP algorithms perform at least as good or better than InfoMap algorithm in the case of pruned datasets. In this experiment, we test the sensitivity of ns-CRP algorithm to the initial \u03b1 by initializing \u03b1 to 0.001, 0.01, and 1.0. We hypothesize that our sampling step makes the algorithm robust to the initial value of \u03b1. We run the ns-CRP clustering algorithm for 100 iterations for different starting values of \u03b1 on each of the pruned datasets. The results of the experiment are given in table 6 for \u03b1 = 0.001. The B-cubed F-scores and ARI scores are quite similar for other initial values of \u03b1, and therefore we do not present those results to avoid repetition. These results suggest that the ns-CRP algorithm is not sensitive to the value of initial \u03b1. Here, we investigate the stability of the ns-CRP algorithm by plotting the B-cubed F-scores against the number of iterations for 30 random meanings from the Indo-European dataset in figure 4. The plot shows that the ns-CRP algorithm quickly moves from an initial configuration with low Fscore to a configuration that has high F-scores within the first 20 iterations. 
We observe similar behaviour of ns-CRP in the case of other language families. In conclusion, the plot shows that the quality of the clusters inferred by the ns-CRP algorithm achieves a high F-score. Moreover, the cluster quality does not change drastically after reaching a local optimum.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Family",
"sec_num": null
},
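The B-cubed F-score used throughout these evaluations can be computed directly from its definition in Amigó et al. (2009): per-word precision and recall over the items sharing a word's predicted and gold cluster, averaged and combined into an F-score. This sketch assumes each clustering is given as a flat list of labels, one per word.

```python
def b_cubed(gold, predicted):
    """B-cubed precision, recall, and F-score for two clusterings,
    each a list of cluster labels indexed by word."""
    n = len(gold)
    precision = recall = 0.0
    for i in range(n):
        same_pred = {j for j in range(n) if predicted[j] == predicted[i]}
        same_gold = {j for j in range(n) if gold[j] == gold[i]}
        correct = len(same_pred & same_gold)  # items clustered with i correctly
        precision += correct / len(same_pred)
        recall += correct / len(same_gold)
    precision, recall = precision / n, recall / n
    f = 2 * precision * recall / (precision + recall)
    return precision, recall, f

# Identical partitions (labels may differ) score a perfect 1.0:
print(b_cubed([0, 0, 1, 1], [5, 5, 9, 9]))  # → (1.0, 1.0, 1.0)
```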
{
"text": "In this subsection, we analyze the difference in the behaviours of sd-CRP algorithms. If w i and w j are cognate and w j and w k are cognate, then all the three words are cognate with each other which follows from the definition of cognacy. The sb-CRP algorithm captures this cognacy relation through the SitBehind function. During cluster formation, w i only has to connect to a word that might have no other words other than itself sitting behind it. We hypothesize that the sb-CRP algorithm would be more efficient at identifying partial cognates where only part of the lexical material is cognate with another word. An example of a partial cognate is the meaning of meat in sweetmeat which is cognate with Swedish mat 'food' (Campbell, 2004) . In contrast, the ns-CRP algorithm is stricter than sb-CRP algorithm in that a word is assigned to the cluster with which it has the highest net similarity. If a word has net similarity of zero with all the existing clusters, then, the word would form its own cluster since \u03b1S(w i , w i ) is always positive.",
"cite_spans": [
{
"start": 729,
"end": 745,
"text": "(Campbell, 2004)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of sd-CRP algorithms",
"sec_num": "7.5"
},
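The ns-CRP assignment rule contrasted above can be sketched as a single update step. This is not the paper's implementation: "net similarity" is assumed here to be the summed pairwise similarity between the word and a cluster's members, and the function and variable names are invented for illustration; only the choice between the best existing cluster and a new self-cluster scored \u03b1·S(w_i, w_i) follows the description.

```python
def assign_word(i, clusters, S, alpha):
    """Assign word i to the cluster with the highest net similarity,
    or open a new cluster when alpha * S[i][i] beats every cluster.

    clusters: list of lists of word indices; S: similarity matrix.
    """
    # Assumed net similarity: sum of similarities to each cluster's members.
    scores = [sum(S[i][j] for j in members) for members in clusters]
    new_cluster_score = alpha * S[i][i]  # always positive, so singletons can form
    if scores and max(scores) > new_cluster_score:
        best = max(range(len(scores)), key=scores.__getitem__)
        clusters[best].append(i)
    else:
        clusters.append([i])
    return clusters

# Toy similarity matrix: word 2 resembles words 0-1; word 3 resembles nothing.
S = [[1.0, 0.9, 0.8, 0.0],
     [0.9, 1.0, 0.7, 0.0],
     [0.8, 0.7, 1.0, 0.0],
     [0.0, 0.0, 0.0, 1.0]]
clusters = [[0, 1]]
assign_word(2, clusters, S, alpha=0.1)  # joins the existing cluster
assign_word(3, clusters, S, alpha=0.1)  # zero net similarity -> own cluster
print(clusters)  # → [[0, 1, 2], [3]]
```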
{
"text": "We presented and compared the performance of two similarity dependent Chinese Restaurant process algorithms at the task of automated cognate detection for six different language families. The sensitivity experiments suggested that the sd-CRP algorithms is not sensitive to initial \u03b1 and missing data. The feature ablation experiments suggest that the inclusion of PMI and SCA features improve the performance of the sd-CRP algorithms. We conclude that the sd-CRP algorithms perform better than the existing clustering algorithms across multiple settings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "As future work, we plan to include language relatedness as features into SVM training and also train the SVM classifier in an unsupervised fashion using the sd-CRP algorithms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
}
],
"back_matter": [
{
"text": "The author thanks the anonymous reviewers for the comments which helped improved the paper. The author is supported by BIGMED project (a Norwegian Research Council LightHouse grant, see bigmed.no). The algorithms were designed when the author took part in the ERC Advanced Grant 324246 EVOLAEMP project led by Gerhard J\u00e4ger. All these sources of support are gratefully acknowledged.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": "The code and data used in this paper are uploaded as a zip file along with this paper. In addition, they are available for download at:https://github.com/PhyloStar/ sd-CRP-cognates",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Supplemental Material",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A comparison of extrinsic clustering evaluation metrics based on formal constraints",
"authors": [
{
"first": "Enrique",
"middle": [],
"last": "Amig\u00f3",
"suffix": ""
},
{
"first": "Julio",
"middle": [],
"last": "Gonzalo",
"suffix": ""
},
{
"first": "Javier",
"middle": [],
"last": "Artiles",
"suffix": ""
},
{
"first": "Felisa",
"middle": [],
"last": "Verdejo",
"suffix": ""
}
],
"year": 2009,
"venue": "Information retrieval",
"volume": "12",
"issue": "4",
"pages": "461--486",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Enrique Amig\u00f3, Julio Gonzalo, Javier Artiles, and Felisa Verdejo. 2009. A comparison of extrinsic clustering evaluation metrics based on formal con- straints. Information retrieval, 12(4):461-486.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Distance dependent chinese restaurant processes",
"authors": [
{
"first": "M",
"middle": [],
"last": "David",
"suffix": ""
},
{
"first": "Peter I",
"middle": [],
"last": "Blei",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Frazier",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Machine Learning Research",
"volume": "12",
"issue": "",
"pages": "2461--2488",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David M Blei and Peter I Frazier. 2011. Distance de- pendent chinese restaurant processes. Journal of Machine Learning Research, 12(Aug):2461-2488.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A hierarchical pitman-yor process hmm for unsupervised part of speech induction",
"authors": [
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "865--874",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Phil Blunsom and Trevor Cohn. 2011. A hierarchical pitman-yor process hmm for unsupervised part of speech induction. In Proceedings of the 49th Annual Meeting of the Association for Computational Lin- guistics: Human Language Technologies-Volume 1, pages 865-874. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Automated reconstruction of ancient languages using probabilistic models of sound change",
"authors": [
{
"first": "Alexandre",
"middle": [],
"last": "Bouchard-C\u00f4t\u00e9",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"L"
],
"last": "Griffiths",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the National Academy of Sciences",
"volume": "110",
"issue": "",
"pages": "4224--4229",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexandre Bouchard-C\u00f4t\u00e9, David Hall, Thomas L. Griffiths, and Dan Klein. 2013. Automated recon- struction of ancient languages using probabilistic models of sound change. Proceedings of the Na- tional Academy of Sciences, 110(11):4224-4229.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Mapping the origins and expansion of the Indo-European language family",
"authors": [
{
"first": "Remco",
"middle": [],
"last": "Bouckaert",
"suffix": ""
},
{
"first": "Philippe",
"middle": [],
"last": "Lemey",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Dunn",
"suffix": ""
},
{
"first": "Simon",
"middle": [
"J"
],
"last": "Greenhill",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"V"
],
"last": "Alekseyenko",
"suffix": ""
},
{
"first": "Alexei",
"middle": [
"J"
],
"last": "Drummond",
"suffix": ""
},
{
"first": "Russell",
"middle": [
"D"
],
"last": "Gray",
"suffix": ""
},
{
"first": "Marc",
"middle": [
"A"
],
"last": "Suchard",
"suffix": ""
},
{
"first": "Quentin",
"middle": [
"D"
],
"last": "Atkinson",
"suffix": ""
}
],
"year": 2012,
"venue": "Science",
"volume": "337",
"issue": "6097",
"pages": "957--960",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Remco Bouckaert, Philippe Lemey, Michael Dunn, Simon J. Greenhill, Alexander V. Alekseyenko, Alexei J. Drummond, Russell D. Gray, Marc A. Suchard, and Quentin D. Atkinson. 2012. Mapping the origins and expansion of the Indo-European lan- guage family. Science, 337(6097):957-960.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "API design for machine learning software: experiences from the scikit-learn project",
"authors": [
{
"first": "Lars",
"middle": [],
"last": "Buitinck",
"suffix": ""
},
{
"first": "Gilles",
"middle": [],
"last": "Louppe",
"suffix": ""
},
{
"first": "Mathieu",
"middle": [],
"last": "Blondel",
"suffix": ""
},
{
"first": "Fabian",
"middle": [],
"last": "Pedregosa",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Mueller",
"suffix": ""
},
{
"first": "Olivier",
"middle": [],
"last": "Grisel",
"suffix": ""
},
{
"first": "Vlad",
"middle": [],
"last": "Niculae",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Prettenhofer",
"suffix": ""
},
{
"first": "Alexandre",
"middle": [],
"last": "Gramfort",
"suffix": ""
},
{
"first": "Jaques",
"middle": [],
"last": "Grobler",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Layton",
"suffix": ""
},
{
"first": "Jake",
"middle": [],
"last": "Vanderplas",
"suffix": ""
},
{
"first": "Arnaud",
"middle": [],
"last": "Joly",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Holt",
"suffix": ""
},
{
"first": "Ga\u00ebl",
"middle": [],
"last": "Varoquaux",
"suffix": ""
}
],
"year": 2013,
"venue": "ECML PKDD Workshop: Languages for Data Mining and Machine Learning",
"volume": "",
"issue": "",
"pages": "108--122",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lars Buitinck, Gilles Louppe, Mathieu Blondel, Fabian Pedregosa, Andreas Mueller, Olivier Grisel, Vlad Niculae, Peter Prettenhofer, Alexandre Gramfort, Jaques Grobler, Robert Layton, Jake VanderPlas, Arnaud Joly, Brian Holt, and Ga\u00ebl Varoquaux. 2013. API design for machine learning software: experi- ences from the scikit-learn project. In ECML PKDD Workshop: Languages for Data Mining and Ma- chine Learning, pages 108-122.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Historical Linguistics: An Introduction",
"authors": [
{
"first": "Lyle",
"middle": [],
"last": "Campbell",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lyle Campbell. 2004. Historical Linguistics: An Intro- duction. Edinburgh University Press, Edinburgh.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Ancestry-constrained phylogenetic analysis supports the Indo-European steppe hypothesis",
"authors": [
{
"first": "Will",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Chundra",
"middle": [],
"last": "Cathcart",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Garrett",
"suffix": ""
}
],
"year": 2015,
"venue": "Language",
"volume": "91",
"issue": "1",
"pages": "194--244",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Will Chang, Chundra Cathcart, David Hall, and An- drew Garrett. 2015. Ancestry-constrained phyloge- netic analysis supports the Indo-European steppe hy- pothesis. Language, 91(1):194-244.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "An algorithm to align words for historical comparison",
"authors": [
{
"first": "Michael",
"middle": [
"A"
],
"last": "Covington",
"suffix": ""
}
],
"year": 1996,
"venue": "Computational Linguistics",
"volume": "22",
"issue": "4",
"pages": "481--496",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael A. Covington. 1996. An algorithm to align words for historical comparison. Computational Linguistics, 22(4):481-496.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A tutorial on bayesian nonparametric models",
"authors": [
{
"first": "J",
"middle": [],
"last": "Samuel",
"suffix": ""
},
{
"first": "David",
"middle": [
"M"
],
"last": "Gershman",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Blei",
"suffix": ""
}
],
"year": 2012,
"venue": "Journal of Mathematical Psychology",
"volume": "56",
"issue": "1",
"pages": "1--12",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samuel J Gershman and David M Blei. 2012. A tuto- rial on bayesian nonparametric models. Journal of Mathematical Psychology, 56(1):1-12.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Contextual dependencies in unsupervised word segmentation",
"authors": [
{
"first": "Sharon",
"middle": [],
"last": "Goldwater",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Thomas",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Griffiths",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "673--680",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sharon Goldwater, Thomas L Griffiths, and Mark John- son. 2006. Contextual dependencies in unsuper- vised word segmentation. In Proceedings of the 21st International Conference on Computational Lin- guistics and the 44th annual meeting of the Associa- tion for Computational Linguistics, pages 673-680. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Producing power-law distributions and damping word frequencies with two-stage language models",
"authors": [
{
"first": "Sharon",
"middle": [],
"last": "Goldwater",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Thomas",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Griffiths",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Machine Learning Research",
"volume": "12",
"issue": "",
"pages": "2335--2382",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sharon Goldwater, Thomas L Griffiths, and Mark John- son. 2011. Producing power-law distributions and damping word frequencies with two-stage language models. Journal of Machine Learning Research, 12(Jul):2335-2382.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Language phylogenies reveal expansion pulses and pauses in pacific settlement. science",
"authors": [
{
"first": "D",
"middle": [],
"last": "Russell",
"suffix": ""
},
{
"first": "Alexei",
"middle": [
"J"
],
"last": "Gray",
"suffix": ""
},
{
"first": "Simon",
"middle": [
"J"
],
"last": "Drummond",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Greenhill",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "323",
"issue": "",
"pages": "479--483",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Russell D Gray, Alexei J Drummond, and Simon J Greenhill. 2009. Language phylogenies reveal ex- pansion pulses and pauses in pacific settlement. sci- ence, 323(5913):479-483.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Large-scale cognate recovery",
"authors": [
{
"first": "David",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "344--354",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Hall and Dan Klein. 2011. Large-scale cognate recovery. In Proceedings of the Conference on Em- pirical Methods in Natural Language Processing, pages 344-354. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Clustering semantically equivalent words into cognate sets in multilingual lists",
"authors": [
{
"first": "Bradley",
"middle": [],
"last": "Hauer",
"suffix": ""
},
{
"first": "Grzegorz",
"middle": [],
"last": "Kondrak",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of 5th International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "865--873",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bradley Hauer and Grzegorz Kondrak. 2011. Clus- tering semantically equivalent words into cognate sets in multilingual lists. In Proceedings of 5th In- ternational Joint Conference on Natural Language Processing, pages 865-873, Chiang Mai, Thailand. Asian Federation of Natural Language Processing.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Comparing partitions",
"authors": [
{
"first": "Lawrence",
"middle": [],
"last": "Hubert",
"suffix": ""
},
{
"first": "Phipps",
"middle": [],
"last": "Arabie",
"suffix": ""
}
],
"year": 1985,
"venue": "Journal of classification",
"volume": "2",
"issue": "1",
"pages": "193--218",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lawrence Hubert and Phipps Arabie. 1985. Compar- ing partitions. Journal of classification, 2(1):193- 218.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Phylogenetic inference from word lists using weighted alignment with empirically determined weights",
"authors": [
{
"first": "Gerhard",
"middle": [],
"last": "J\u00e4ger",
"suffix": ""
}
],
"year": 2013,
"venue": "Language Dynamics and Change",
"volume": "3",
"issue": "2",
"pages": "245--291",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gerhard J\u00e4ger. 2013. Phylogenetic inference from word lists using weighted alignment with empiri- cally determined weights. Language Dynamics and Change, 3(2):245-291.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Using ancestral state reconstruction methods for onomasiological reconstruction in multilingual word lists. Forthcoming, Language Dynamics and Change",
"authors": [
{
"first": "Gerhard",
"middle": [],
"last": "J\u00e4ger",
"suffix": ""
},
{
"first": "Johann-Mattis",
"middle": [],
"last": "List",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gerhard J\u00e4ger and Johann-Mattis List. 2017. Using ancestral state reconstruction methods for onoma- siological reconstruction in multilingual word lists. Forthcoming, Language Dynamics and Change.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Using support vector machines and state-of-the-art algorithms for phonetic alignment to identify cognates in multi-lingual wordlists",
"authors": [
{
"first": "Gerhard",
"middle": [],
"last": "J\u00e4ger",
"suffix": ""
},
{
"first": "Johann-Mattis",
"middle": [],
"last": "List",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Sofroniev",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter",
"volume": "1",
"issue": "",
"pages": "1205--1216",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gerhard J\u00e4ger, Johann-Mattis List, and Pavel Sofroniev. 2017. Using support vector ma- chines and state-of-the-art algorithms for phonetic alignment to identify cognates in multi-lingual wordlists. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 1205-1216. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Identifying cognates by phonetic and semantic similarity",
"authors": [
{
"first": "Grzegorz",
"middle": [],
"last": "Kondrak",
"suffix": ""
}
],
"year": 2001,
"venue": "North American Chapter Of The Association For Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Grzegorz Kondrak. 2001. Identifying cognates by pho- netic and semantic similarity. In North American Chapter Of The Association For Computational Lin- guistics, pages 1-8. Association for Computational Linguistics Morristown, NJ, USA.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Algorithms for language reconstruction",
"authors": [
{
"first": "Grzegorz",
"middle": [],
"last": "Kondrak",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Grzegorz Kondrak. 2002. Algorithms for language re- construction. Ph.D. thesis, University of Toronto, Ontario, Canada.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Cognates and word alignment in bitexts",
"authors": [
{
"first": "Grzegorz",
"middle": [],
"last": "Kondrak",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Tenth Machine Translation Summit (MT Summit X)",
"volume": "",
"issue": "",
"pages": "305--312",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Grzegorz Kondrak. 2005. Cognates and word align- ment in bitexts. In Proceedings of the Tenth Ma- chine Translation Summit (MT Summit X), pages 305-312.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Identification of cognates and recurrent sound correspondences in word lists",
"authors": [
{
"first": "Grzegorz",
"middle": [],
"last": "Kondrak",
"suffix": ""
}
],
"year": 2009,
"venue": "Traitement Automatique des Langues et Langues Anciennes",
"volume": "50",
"issue": "2",
"pages": "201--235",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Grzegorz Kondrak. 2009. Identification of cognates and recurrent sound correspondences in word lists. Traitement Automatique des Langues et Langues Anciennes, 50(2):201-235.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Revisiting kmeans",
"authors": [
{
"first": "Brian",
"middle": [],
"last": "Kulis",
"suffix": ""
},
{
"first": "Michael I Jordan",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2011,
"venue": "New algorithms via Bayesian nonparametrics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1111.0352"
]
},
"num": null,
"urls": [],
"raw_text": "Brian Kulis and Michael I Jordan. 2011. Revisiting k- means: New algorithms via Bayesian nonparamet- rics. arXiv preprint arXiv:1111.0352.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Efficiency of Markov chain Monte Carlo tree proposals in Bayesian phylogenetics",
"authors": [
{
"first": "Clemens",
"middle": [],
"last": "Lakner",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Van Der",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mark",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "John",
"suffix": ""
},
{
"first": "Bret",
"middle": [],
"last": "Huelsenbeck",
"suffix": ""
},
{
"first": "Fredrik",
"middle": [],
"last": "Larget",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ronquist",
"suffix": ""
}
],
"year": 2008,
"venue": "Systematic biology",
"volume": "57",
"issue": "1",
"pages": "86--103",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Clemens Lakner, Paul Van Der Mark, John P Huelsen- beck, Bret Larget, and Fredrik Ronquist. 2008. Ef- ficiency of Markov chain Monte Carlo tree propos- als in Bayesian phylogenetics. Systematic biology, 57(1):86-103.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "LexStat: Automatic detection of cognates in multilingual wordlists",
"authors": [
{
"first": "Johann-Mattis",
"middle": [],
"last": "List",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the EACL 2012 Joint Workshop of LINGVIS & UNCLH",
"volume": "",
"issue": "",
"pages": "117--125",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Johann-Mattis List. 2012a. LexStat: Automatic de- tection of cognates in multilingual wordlists. In Proceedings of the EACL 2012 Joint Workshop of LINGVIS & UNCLH, pages 117-125, Avignon, France. Association for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "SCA: phonetic alignment based on sound classes",
"authors": [
{
"first": "Johann-Mattis",
"middle": [],
"last": "List",
"suffix": ""
}
],
"year": 2012,
"venue": "New Directions in Logic, Language and Computation",
"volume": "",
"issue": "",
"pages": "32--51",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Johann-Mattis List. 2012b. SCA: phonetic alignment based on sound classes. In New Directions in Logic, Language and Computation, pages 32-51. Springer.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Sequence comparison in historical linguistics",
"authors": [
{
"first": "Johann-Mattis",
"middle": [],
"last": "List",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Johann-Mattis List. 2014. Sequence comparison in historical linguistics. D\u00fcsseldorf University Press, D\u00fcsseldorf.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Lingpy. a python library for quantitative tasks in historical linguistics",
"authors": [
{
"first": "Johann-Mattis",
"middle": [],
"last": "List",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Greenhill",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Forkel",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Johann-Mattis List, Simon Greenhill, and Robert Forkel. 2017a. Lingpy. a python library for quan- titative tasks in historical linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "The potential of automatic word comparison for historical linguistics",
"authors": [
{
"first": "Johann-Mattis",
"middle": [],
"last": "List",
"suffix": ""
},
{
"first": "Simon",
"middle": [
"J"
],
"last": "Greenhill",
"suffix": ""
},
{
"first": "Russell",
"middle": [
"D"
],
"last": "Gray",
"suffix": ""
}
],
"year": 2017,
"venue": "PLOS ONE",
"volume": "12",
"issue": "1",
"pages": "1--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Johann-Mattis List, Simon J. Greenhill, and Russell D. Gray. 2017b. The potential of automatic word comparison for historical linguistics. PLOS ONE, 12(1):1-18.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Using sequence similarity networks to identify partial cognates in multilingual wordlists",
"authors": [
{
"first": "Johann-Mattis",
"middle": [],
"last": "List",
"suffix": ""
},
{
"first": "Philippe",
"middle": [],
"last": "Lopez",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Bapteste",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "599--605",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Johann-Mattis List, Philippe Lopez, and Eric Bapteste. 2016. Using sequence similarity networks to iden- tify partial cognates in multilingual wordlists. In Proceedings of the 54th Annual Meeting of the As- sociation for Computational Linguistics (Volume 2: Short Papers), pages 599-605, Berlin, Germany. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "A central asian language survey",
"authors": [
{
"first": "Philippe",
"middle": [],
"last": "Mennecier",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Nerbonne",
"suffix": ""
},
{
"first": "Evelyne",
"middle": [],
"last": "Heyer",
"suffix": ""
},
{
"first": "Franz",
"middle": [],
"last": "Manni",
"suffix": ""
}
],
"year": 2016,
"venue": "Language Dynamics and Change",
"volume": "6",
"issue": "1",
"pages": "57--98",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philippe Mennecier, John Nerbonne, Evelyne Heyer, and Franz Manni. 2016. A central asian language survey. Language Dynamics and Change, 6(1):57- 98.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "A general method applicable to the search for similarities in the amino acid sequence of two proteins",
"authors": [
{
"first": "Saul",
"middle": [
"B"
],
"last": "Needleman",
"suffix": ""
},
{
"first": "Christian",
"middle": [
"D"
],
"last": "Wunsch",
"suffix": ""
}
],
"year": 1970,
"venue": "Journal of Molecular Biology",
"volume": "48",
"issue": "3",
"pages": "443--453",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saul B. Needleman and Christian D. Wunsch. 1970. A general method applicable to the search for simi- larities in the amino acid sequence of two proteins. Journal of Molecular Biology, 48(3):443-453.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Automatic cognate identification with gap-weighted string subsequences",
"authors": [
{
"first": "Taraka",
"middle": [],
"last": "Rama",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1227--1231",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Taraka Rama. 2015. Automatic cognate identification with gap-weighted string subsequences. In Proceed- ings of the 2015 Conference of the North Ameri- can Chapter of the Association for Computational Linguistics: Human Language Technologies., pages 1227-1231.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Siamese convolutional networks for cognate identification",
"authors": [
{
"first": "Taraka",
"middle": [],
"last": "Rama",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of COL-ING 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "1018--1027",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Taraka Rama. 2016. Siamese convolutional networks for cognate identification. In Proceedings of COL- ING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 1018-1027.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Two methods for automatic identification of cognates",
"authors": [
{
"first": "Taraka",
"middle": [],
"last": "Rama",
"suffix": ""
},
{
"first": "Prasant",
"middle": [],
"last": "Kolachina",
"suffix": ""
},
{
"first": "Sudheer",
"middle": [],
"last": "Kolachina",
"suffix": ""
}
],
"year": 2013,
"venue": "QITL",
"volume": "5",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Taraka Rama, Prasant Kolachina, and Sudheer Ko- lachina. 2013. Two methods for automatic identi- fication of cognates. QITL, 5:76.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Are automatic methods for cognate detection good enough for phylogenetic reconstruction in historical linguistics?",
"authors": [
{
"first": "Taraka",
"middle": [],
"last": "Rama",
"suffix": ""
},
{
"first": "Johann-Mattis",
"middle": [],
"last": "List",
"suffix": ""
},
{
"first": "Johannes",
"middle": [],
"last": "Wahle",
"suffix": ""
},
{
"first": "Gerhard",
"middle": [],
"last": "J\u00e4ger",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "2",
"issue": "",
"pages": "393--400",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Taraka Rama, Johann-Mattis List, Johannes Wahle, and Gerhard J\u00e4ger. 2018. Are automatic methods for cognate detection good enough for phylogenetic re- construction in historical linguistics? In Proceed- ings of the 2018 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 2 (Short Papers), volume 2, pages 393-400.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Fast and unsupervised methods for multilingual cognate clustering",
"authors": [
{
"first": "Taraka",
"middle": [],
"last": "Rama",
"suffix": ""
},
{
"first": "Johannes",
"middle": [],
"last": "Wahle",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Sofroniev",
"suffix": ""
},
{
"first": "Gerhard",
"middle": [],
"last": "J\u00e4ger",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1702.04938"
]
},
"num": null,
"urls": [],
"raw_text": "Taraka Rama, Johannes Wahle, Pavel Sofroniev, and Gerhard J\u00e4ger. 2017. Fast and unsupervised meth- ods for multilingual cognate clustering. arXiv preprint arXiv:1702.04938.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Deciphering foreign language",
"authors": [
{
"first": "Sujith",
"middle": [],
"last": "Ravi",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "12--21",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sujith Ravi and Kevin Knight. 2011. Deciphering for- eign language. In Proceedings of the 49th Annual Meeting of the Association for Computational Lin- guistics: Human Language Technologies-Volume 1, pages 12-21. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Maps of random walks on complex networks reveal community structure",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Rosvall",
"suffix": ""
},
{
"first": "Carl",
"middle": [
"T"
],
"last": "Bergstrom",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the National Academy of Sciences",
"volume": "105",
"issue": "4",
"pages": "1118--1123",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin Rosvall and Carl T Bergstrom. 2008. Maps of random walks on complex networks reveal commu- nity structure. Proceedings of the National Academy of Sciences, 105(4):1118-1123.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Austroasiatic lexical data set for phylogenetic analyses 2015 version",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Sidwell",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul Sidwell. 2015. Austroasiatic lexical data set for phylogenetic analyses 2015 version.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Pos induction with distributional and morphological information using a distance-dependent chinese restaurant process",
"authors": [
{
"first": "Kairit",
"middle": [],
"last": "Sirts",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Eisenstein",
"suffix": ""
},
{
"first": "Micha",
"middle": [],
"last": "Elsner",
"suffix": ""
},
{
"first": "Sharon",
"middle": [],
"last": "Goldwater",
"suffix": ""
}
],
"year": 2014,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "265--271",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kairit Sirts, Jacob Eisenstein, Micha Elsner, and Sharon Goldwater. 2014. Pos induction with dis- tributional and morphological information using a distance-dependent chinese restaurant process. In ACL (2), pages 265-271.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "A statistical model for lost language decipherment",
"authors": [
{
"first": "Benjamin",
"middle": [],
"last": "Snyder",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1048--1057",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Benjamin Snyder, Regina Barzilay, and Kevin Knight. 2010. A statistical model for lost language deci- pherment. In Proceedings of the 48th Annual Meet- ing of the Association for Computational Linguis- tics, pages 1048-1057. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Spectral chinese restaurant processes: Nonparametric clustering based on similarities",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"L"
],
"last": "Maas",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2011,
"venue": "AISTATS",
"volume": "",
"issue": "",
"pages": "698--706",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, Andrew L Maas, and Christopher D Manning. 2011. Spectral chinese restaurant pro- cesses: Nonparametric clustering based on similari- ties. In AISTATS, pages 698-706.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "A statistical method for evaluating systematic relationships",
"authors": [
{
"first": "Robert",
"middle": [
"R"
],
"last": "Sokal",
"suffix": ""
},
{
"first": "Charles",
"middle": [
"D"
],
"last": "Michener",
"suffix": ""
}
],
"year": 1958,
"venue": "",
"volume": "38",
"issue": "",
"pages": "1409--1438",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert R Sokal and Charles D Michener. 1958. A statistical method for evaluating systematic rela- tionships. University of Kansas Science Bulletin, 38:1409-1438.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Identifying cognate sets across dictionaries of related languages",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "St Arnaud",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Beck",
"suffix": ""
},
{
"first": "Grzegorz",
"middle": [],
"last": "Kondrak",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2509--2518",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam St Arnaud, David Beck, and Grzegorz Kondrak. 2017. Identifying cognate sets across dictionaries of related languages. In Proceedings of the 2017 Con- ference on Empirical Methods in Natural Language Processing, pages 2509-2518.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Lexico-statistic dating of prehistoric ethnic contacts: with special reference to North American Indians and Eskimos",
"authors": [
{
"first": "Morris",
"middle": [],
"last": "Swadesh",
"suffix": ""
}
],
"year": 1952,
"venue": "Proceedings of the American philosophical society",
"volume": "96",
"issue": "",
"pages": "452--463",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Morris Swadesh. 1952. Lexico-statistic dating of pre- historic ethnic contacts: with special reference to North American Indians and Eskimos. Proceedings of the American philosophical society, 96(4):452- 463.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Languages with longer words have more lexical change",
"authors": [
{
"first": "S\u00f8ren",
"middle": [],
"last": "Wichmann",
"suffix": ""
},
{
"first": "Eric",
"middle": [
"W"
],
"last": "Holman",
"suffix": ""
}
],
"year": 2013,
"venue": "Approaches to Measuring Linguistic Differences",
"volume": "",
"issue": "",
"pages": "249--281",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S\u00f8ren Wichmann and Eric W Holman. 2013. Lan- guages with longer words have more lexical change. In Approaches to Measuring Linguistic Differences, pages 249-281. Mouton de Gruyter.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"text": "If arg max k s ik < \u03b1S(wi, wi) assign wi to a new cluster. -Else, assign wi to the cluster k where k = arg max k s ik .",
"uris": null
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"text": "If arg max k s ik < \u03b1S(wi, wi) assign wi to a new cluster. -Else, link wi to a word w k where k = arg max k s ik .\u2022 Sample \u03b1 using a Metropolis-Hastings step4.2.2 sb-CRPThe sd-CRP variant of Socher et al. (2011) forms a directed link from word w i to a different word w \u2212i based on the SITBEHIND function. We call this variant of sd-CRP algorithm as sb-CRP after SitBehind function. The function SitBehind(w i ) is recursive in nature and returns the set of words from which there is a path to w i including itself. A directed link between w i to itself indicates that there is no path from w i to any other word and sb-CRP clustering for meaning fish. Vertices (words) with the same color are cognates.",
"uris": null
},
"FIGREF2": {
"type_str": "figure",
"num": null,
"text": "The B-cubed F-scores are shown in the top row. The bottom row shows the ARI scores for each of the datasets. The horizontal bar shows the median score and the mean of the scores is shown by .",
"uris": null
},
"FIGREF3": {
"type_str": "figure",
"num": null,
"text": "The top row shows the B-cubed F-scores and the bottom row shows the ARI scores for pruned datasets of five language families.",
"uris": null
},
"FIGREF4": {
"type_str": "figure",
"num": null,
"text": "",
"uris": null
},
"FIGREF5": {
"type_str": "figure",
"num": null,
"text": "Plot showing the convergence of the sd-CRP algorithm for 30 meanings from the Indo-European dataset.",
"uris": null
},
"TABREF1": {
"content": "<table><tr><td>: Excerpt of the Indo-European word list</td></tr><tr><td>(from our dataset) in ASJP code for five languages</td></tr><tr><td>belonging to Germanic (English, German, and</td></tr><tr><td>Swedish) and Romance (Spanish and French) sub-</td></tr><tr><td>families. Cognates are indicated with the same su-</td></tr><tr><td>perscript.</td></tr></table>",
"num": null,
"type_str": "table",
"html": null,
"text": ""
},
"TABREF2": {
"content": "<table><tr><td colspan=\"4\">Afrasian (21/40), Kadai (12/40), Kamasau</td></tr><tr><td colspan=\"4\">(8/36), Lolo-Burmese (15/40), Mayan</td></tr><tr><td colspan=\"4\">(30/100), Miao-Yao (6/36), Mixe-Zoque</td></tr><tr><td colspan=\"4\">(10/100), Mon-Khmer (16/100), Bai dialects</td></tr><tr><td colspan=\"4\">(9/110), Chinese dialects (18/180), Japanese</td></tr><tr><td colspan=\"4\">(10/200), ObUgrian (21/110; Hungarian</td></tr><tr><td colspan=\"4\">excluded from Ugric sub-family).</td></tr><tr><td colspan=\"4\">We extracted a total of 48,389 cognate pairs</td></tr><tr><td colspan=\"4\">(positive) and 51,452 non-cognate pairs (negative)</td></tr><tr><td colspan=\"3\">for training our SVM model.</td><td/></tr><tr><td>Dataset</td><td colspan=\"2\">Meanings Languages</td><td>Source</td></tr><tr><td>Austronesian</td><td>210</td><td>395</td><td>Gray et al. (2009)</td></tr><tr><td>Austro-Asiatic</td><td>200</td><td>122</td><td>Sidwell (2015)</td></tr><tr><td>Indo-European</td><td>208</td><td>52</td><td>Bouckaert et al. (2012)</td></tr><tr><td>Central Asian dialects</td><td>183</td><td>88</td><td>Mennecier et al. (2016)</td></tr></table>",
"num": null,
"type_str": "table",
"html": null,
"text": "Test datasetsWe test our clustering algorithms on word lists belonging to four language families given in table 2."
},
"TABREF3": {
"content": "<table/>",
"num": null,
"type_str": "table",
"html": null,
"text": "The second, third, and fourth columns show the number of number of meanings, languages, and the source of each dataset respectively."
},
"TABREF6": {
"content": "<table/>",
"num": null,
"type_str": "table",
"html": null,
"text": "Results of feature ablation experiments on Austronesian and Austro-Asiatic datasets."
},
"TABREF8": {
"content": "<table/>",
"num": null,
"type_str": "table",
"html": null,
"text": ""
},
"TABREF9": {
"content": "<table/>",
"num": null,
"type_str": "table",
"html": null,
"text": "The mean and standard deviation of the F-scores and ARI scores for \u03b1 = 0.001 on pruned datasets."
}
}
}
}