{
"paper_id": "P10-1039",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T09:20:49.602004Z"
},
"title": "Enhanced word decomposition by calibrating the decision threshold of probabilistic models and using a model ensemble",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Spiegler",
"suffix": "",
"affiliation": {
"laboratory": "Intelligent Systems Laboratory",
"institution": "University of Bristol",
"location": {
"country": "U.K"
}
},
"email": "spiegler@cs.bris.ac.uk"
},
{
"first": "Peter",
"middle": [
"A"
],
"last": "Flach",
"suffix": "",
"affiliation": {
"laboratory": "Intelligent Systems Laboratory",
"institution": "University of Bristol",
"location": {
"country": "U.K"
}
},
"email": "peter.flach@bristol.ac.uk"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper demonstrates that the use of ensemble methods and careful calibration of the decision threshold can significantly improve the performance of machine learning methods for morphological word decomposition. We employ two algorithms which come from a family of generative probabilistic models. The models consider segment boundaries as hidden variables and include probabilities for letter transitions within segments. The advantage of this model family is that it can learn from small datasets and easily generalises to larger datasets. The first algorithm, PROMODES, which participated in the Morpho Challenge 2009 (an international competition for unsupervised morphological analysis), employs a lower-order model, whereas the second algorithm, PROMODES-H, is a novel development of the first using a higher-order model. We present the mathematical description of both algorithms, conduct experiments on the morphologically rich language Zulu and compare characteristics of both algorithms based on the experimental results.",
"pdf_parse": {
"paper_id": "P10-1039",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper demonstrates that the use of ensemble methods and careful calibration of the decision threshold can significantly improve the performance of machine learning methods for morphological word decomposition. We employ two algorithms which come from a family of generative probabilistic models. The models consider segment boundaries as hidden variables and include probabilities for letter transitions within segments. The advantage of this model family is that it can learn from small datasets and easily generalises to larger datasets. The first algorithm, PROMODES, which participated in the Morpho Challenge 2009 (an international competition for unsupervised morphological analysis), employs a lower-order model, whereas the second algorithm, PROMODES-H, is a novel development of the first using a higher-order model. We present the mathematical description of both algorithms, conduct experiments on the morphologically rich language Zulu and compare characteristics of both algorithms based on the experimental results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Words are often considered as the smallest unit of a language when examining the grammatical structure or the meaning of sentences, referred to as syntax and semantics, however, words themselves possess an internal structure denominated by the term word morphology. It is worthwhile studying this internal structure since a language description using its morphological formation is more compact and complete than listing all possible words. This study is called morphological analysis. According to Goldsmith (2009) four tasks are assigned to morphological analysis: word decomposition into morphemes, building morpheme dictionaries, defining morphosyntactical rules which state how morphemes can be combined to valid words and defining morphophonological rules that specify phonological changes morphemes undergo when they are combined to words. Results of morphological analysis are applied in speech synthesis (Sproat, 1996) and recognition (Hirsimaki et al., 2006) , machine translation (Amtrup, 2003) and information retrieval (Kettunen, 2009) .",
"cite_spans": [
{
"start": 499,
"end": 515,
"text": "Goldsmith (2009)",
"ref_id": "BIBREF6"
},
{
"start": 913,
"end": 927,
"text": "(Sproat, 1996)",
"ref_id": "BIBREF26"
},
{
"start": 944,
"end": 968,
"text": "(Hirsimaki et al., 2006)",
"ref_id": "BIBREF9"
},
{
"start": 991,
"end": 1005,
"text": "(Amtrup, 2003)",
"ref_id": "BIBREF0"
},
{
"start": 1032,
"end": 1048,
"text": "(Kettunen, 2009)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the past years, there has been a lot of interest and activity in the development of algorithms for morphological analysis. All these approaches have in common that they build a morphological model which is then applied to analyse words. Models are constructed using rule-based methods (Mooney and Califf, 1996; Muggleton and Bain, 1999) , connectionist methods (Rumelhart and McClelland, 1986; Gasser, 1994) or statistical or probabilistic methods (Harris, 1955; Hafer and Weiss, 1974) . Another way of classifying approaches is based on the learning aspect during the construction of the morphological model. If the data for training the model has the same structure as the desired output of the morphological analysis, in other words, if a morphological model is learnt from labelled data, the algorithm is classified under supervised learning. An example for a supervised algorithm is given by Oflazer et al. (2001) . If the input data has no information towards the desired output of the analysis, the algorithm uses unsupervised learning. Unsupervised algorithms for morphological analysis are Linguistica (Goldsmith, 2001) , Morfessor (Creutz, 2006) and Paramor (Monson, 2008) . Minimally or semi-supervised algorithms are provided with partial information during the learning process. This has been done, for instance, by Shalonova et al. (2009) who provided stems in addition to a word list in order to find multiple pre-and suffixes. A comparison of different levels of supervision for morphology learning on Zulu has been carried out by Spiegler et al. (2008) .",
"cite_spans": [
{
"start": 288,
"end": 313,
"text": "(Mooney and Califf, 1996;",
"ref_id": "BIBREF14"
},
{
"start": 314,
"end": 339,
"text": "Muggleton and Bain, 1999)",
"ref_id": "BIBREF15"
},
{
"start": 364,
"end": 396,
"text": "(Rumelhart and McClelland, 1986;",
"ref_id": "BIBREF18"
},
{
"start": 397,
"end": 410,
"text": "Gasser, 1994)",
"ref_id": "BIBREF4"
},
{
"start": 451,
"end": 465,
"text": "(Harris, 1955;",
"ref_id": "BIBREF8"
},
{
"start": 466,
"end": 488,
"text": "Hafer and Weiss, 1974)",
"ref_id": "BIBREF7"
},
{
"start": 900,
"end": 921,
"text": "Oflazer et al. (2001)",
"ref_id": "BIBREF16"
},
{
"start": 1114,
"end": 1131,
"text": "(Goldsmith, 2001)",
"ref_id": "BIBREF5"
},
{
"start": 1144,
"end": 1158,
"text": "(Creutz, 2006)",
"ref_id": "BIBREF2"
},
{
"start": 1171,
"end": 1185,
"text": "(Monson, 2008)",
"ref_id": "BIBREF13"
},
{
"start": 1332,
"end": 1355,
"text": "Shalonova et al. (2009)",
"ref_id": "BIBREF19"
},
{
"start": 1550,
"end": 1572,
"text": "Spiegler et al. (2008)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "1.1"
},
{
"text": "Our two algorithms, PROMODES and PROMODES-H, perform word decomposition and are based on probabilistic methods by incorporating a probabilistic generative model. 1 Their parameters can be estimated from either labelled data, using maximum likelihood estimates, or from unlabelled data by expectation maximization 2 which makes them either supervised or unsupervised algorithms.",
"cite_spans": [
{
"start": 162,
"end": 163,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "1.1"
},
{
"text": "The purpose of this paper is to analyse the underlying probabilistic models and the types of errors committed by each. Furthermore, we investigate how the decision threshold can be calibrated and test a model ensemble.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "1.1"
},
{
"text": "The remainder is structured as follows. In Section 2 we introduce the probabilistic generative process and show in Sections 2.1 and 2.2 how we incorporate this process in PROMODES and PROMODES-H. We start our experiments by examining the learning behaviour of the algorithms in 3.1. Subsequently, we perform a position-wise comparison of predictions in 3.2, show how we find a better decision threshold for placing morpheme boundaries in 3.3, and combine both algorithms using a model ensemble to leverage individual strengths in 3.4. In 3.5 we examine how the individual algorithms contribute to the result of the ensemble. In Section 4 we compare our approaches to related work, and in Section 5 we draw our conclusions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "1.1"
},
{
"text": "Intuitively, we could say that our models describe the process of word generation from left to right by alternately using two dice: the first to decide whether to place a morpheme boundary in the current word position and the second to get a corresponding letter transition. We are trying to reverse this process in order to find the underlying sequence of tosses which determines the morpheme boundaries. [Footnote 1: PROMODES stands for PRObabilistic MOdel for different DEgrees of Supervision. The H of PROMODES-H refers to Higher order.]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic generative model",
"sec_num": "2"
},
{
"text": "[Footnote 2] In (Spiegler et al., 2010a) we have presented an unsupervised version of PROMODES.",
"cite_spans": [
{
"start": 16,
"end": 40,
"text": "(Spiegler et al., 2010a)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic generative model",
"sec_num": "2"
},
{
"text": "We are applying the notion of a probabilistic generative process consisting of words as observed variables X and their hidden segmentation as latent variables Y. If a generative model is fully parameterised, it can be reversed to find the underlying word decomposition by forming the conditional probability distribution Pr(Y|X).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic generative model",
"sec_num": "2"
},
{
"text": "Let us first define the model-independent components. A given word w_j \u2208 W with 1 \u2264 j \u2264 |W| consists of n letters and has m = n \u2212 1 positions for inserting boundaries. A word's segmentation is depicted as a boundary vector b_j = (b_j1, . . . , b_jm) consisting of boundary values b_ji \u2208 {0, 1} with 1 \u2264 i \u2264 m which disclose whether or not a boundary is placed in position i. A letter l_j,i-1 precedes position i in w_j and a letter l_ji follows it. Both letters l_j,i-1 and l_ji are part of an alphabet. Furthermore, we introduce a letter transition t_ji which goes from l_j,i-1 to l_ji.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic generative model",
"sec_num": "2"
},
{
"text": "PROMODES is based on a zero-order model for boundaries b_ji and on a first-order model for letter transitions t_ji. It describes a word's segmentation by its morpheme boundaries and resulting letter transitions within morphemes. A boundary vector b_j is found by evaluating each position i with",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PROMODES",
"sec_num": "2.1"
},
{
"text": "arg max_{b_ji} Pr(b_ji | t_ji) = arg max_{b_ji} Pr(b_ji) Pr(t_ji | b_ji). (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PROMODES",
"sec_num": "2.1"
},
{
"text": "The first component of the equation above is the probability distribution over non-/boundaries Pr(b_ji). We assume that a boundary in i is inserted independently of other boundaries (zero-order) and of the graphemic representation of the word; it is, however, conditioned on the length of the word m_j, which means that the probability distribution is in fact Pr(b_ji | m_j). We guarantee \u2211_{r=0}^{1} Pr(b_ji=r | m_j) = 1. To simplify the notation in later explanations, we will refer to Pr(b_ji | m_j) as Pr(b_ji).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PROMODES",
"sec_num": "2.1"
},
{
"text": "The second component is the letter transition probability distribution Pr(t_ji | b_ji). We suppose a first-order Markov chain consisting of transitions t_ji from letter l_j,i-1 \u2208 A_B to letter l_ji \u2208 A, where A is a regular letter alphabet and A_B = A \u222a {B} includes B as an abstract morpheme start symbol which can occur in l_j,i-1. For instance, the suffix 's' of the verb form gets, marking 3rd person singular, would be modelled as B \u2192 s, whereas a morpheme-internal transition could be g \u2192 e. We guarantee \u2211_{l_ji \u2208 A} Pr(t_ji | b_ji) = 1 with t_ji being a transition from a certain l_j,i-1 \u2208 A_B to l_ji. The advantage of the model is that instead of evaluating an exponential number of possible segmentations (2^m), the best segmentation b*_j = (b*_j1, . . . , b*_jm) is found with 2m position-wise evaluations using",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PROMODES",
"sec_num": "2.1"
},
{
"text": "b*_ji = arg max_{b_ji} Pr(b_ji | t_ji) = 1 if Pr(b_ji=1) Pr(t_ji | b_ji=1) > Pr(b_ji=0) Pr(t_ji | b_ji=0), and 0 otherwise. (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PROMODES",
"sec_num": "2.1"
},
{
"text": "The simplifying assumptions made, however, reduce the expressive power of the model by not allowing any dependencies on preceding boundaries or letters. This can lead to over-segmentation and therefore influences the performance of PROMODES. For this reason, we have extended the model, which led to PROMODES-H, a higher-order probabilistic model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PROMODES",
"sec_num": "2.1"
},
{
"text": "In contrast to the original PROMODES model, we also consider the boundary value b_j,i-1 and modify our transition assumptions for PROMODES-H in such a way that the new algorithm applies a first-order boundary model and a second-order transition model. A transition t_ji is now defined as a transition from an abstract symbol in l_j,i-1 \u2208 {N, B} to a letter in l_ji \u2208 A. The abstract symbol is N or B depending on whether b_ji is 0 or 1. This holds equivalently for letter transitions t_j,i-1. The suffix of our previous example gets would be modelled as N \u2192 t \u2192 B \u2192 s.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PROMODES-H",
"sec_num": "2.2"
},
{
"text": "Our boundary vector b_j is then constructed from",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PROMODES-H",
"sec_num": "2.2"
},
{
"text": "arg max_{b_ji} Pr(b_ji | t_ji, t_j,i-1, b_j,i-1) = arg max_{b_ji} Pr(b_ji | b_j,i-1) Pr(t_ji | b_ji, t_j,i-1, b_j,i-1). (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PROMODES-H",
"sec_num": "2.2"
},
{
"text": "The first component, the probability distribution over non-/boundaries Pr(",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PROMODES-H",
"sec_num": "2.2"
},
{
"text": "b ji |b j,i-1 ), satisfies \u2211 1 r=0 Pr(b ji =r|b j,i-1 )=1 with b j,i-1 , b ji \u2208 {0, 1}. As for PROMODES, Pr(b ji |b j,i-1 ) is short- hand for Pr(b ji |b j,i-1 , m j ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PROMODES-H",
"sec_num": "2.2"
},
{
"text": "The second component, the letter transition probability",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PROMODES-H",
"sec_num": "2.2"
},
{
"text": "distribution Pr(t_ji | b_ji, t_j,i-1, b_j,i-1), fulfils \u2211_{l_ji \u2208 A} Pr(t_ji | b_ji, t_j,i-1, b_j,i-1) = 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PROMODES-H",
"sec_num": "2.2"
},
{
"text": "with t_ji being a transition from a certain l_j,i-1 \u2208 A_B to l_ji. Once again, we find the word's best segmentation b*_j in 2m evaluations with",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PROMODES-H",
"sec_num": "2.2"
},
{
"text": "b*_ji = arg max_{b_ji} Pr(b_ji | t_ji, t_j,i-1, b_j,i-1) = 1 if Pr(b_ji=1 | b_j,i-1) Pr(t_ji | b_ji=1, t_j,i-1, b_j,i-1) > Pr(b_ji=0 | b_j,i-1) Pr(t_ji | b_ji=0, t_j,i-1, b_j,i-1), and 0 otherwise. (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PROMODES-H",
"sec_num": "2.2"
},
{
"text": "We will show in the experimental results that increasing the memory of the algorithm by looking at b_j,i-1 leads to a better performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PROMODES-H",
"sec_num": "2.2"
},
{
"text": "In the Morpho Challenge 2009, PROMODES achieved competitive results on Finnish, Turkish, English and German -and scored highest on nonvowelized and vowelized Arabic compared to 9 other algorithms (Kurimo et al., 2009) . For the experiments described below, we chose the South African language Zulu since our research work mainly aims at creating morphological resources for under-resourced indigenous languages. Zulu is an agglutinative language with a complex morphology where multiple prefixes and suffixes contribute to a word's meaning. Nevertheless, it seems that segment boundaries are more likely in certain word positions. The PROMODES family harnesses this characteristic in combination with describing morphemes by letter transitions. From the Ukwabelana corpus (Spiegler et al., 2010b) we sampled 2500 Zulu words with a single segmentation each.",
"cite_spans": [
{
"start": 196,
"end": 217,
"text": "(Kurimo et al., 2009)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "3"
},
{
"text": "In our first experiment we applied 10-fold cross-validation on datasets ranging from 500 to 2500 words with the goal of measuring how the learning improves with increasing experience in terms of training set size. We want to remind the reader that our two algorithms are aimed at small datasets. We randomly split each dataset into 10 subsets where each subset was a test set and the corresponding 9 remaining sets were merged to a training set. We kept the labels of the training set to determine model parameters through maximum likelihood estimates and applied each model to the test set from which we had removed the answer keys. We compared results on the test set against the ground truth by counting true positive (TP), false positive (FP), true negative (TN) and false negative (FN) morpheme boundary predictions. Counts were summarised using precision 3 , recall 4 and f-measure 5 , as shown in Table 1. For PROMODES we can see in Table 1a that the precision increases slightly from 0.7127 to 0.7557 whereas the recall decreases from 0.3500 to 0.3045 going from dataset size 500 to 2500. This suggests that to some extent fewer morpheme boundaries are discovered but the ones which are found are more likely to be correct. We believe that this effect is caused by the limited memory of the model which uses order zero for the occurrence of a boundary and order one for letter transitions. It seems that the model gets quickly saturated in terms of incorporating new information and therefore precision and recall do not drastically change for increasing dataset sizes. In Table 1b we show results for PROMODES-H. Across the datasets precision stays comparatively constant around a mean of 0.6949 whereas the recall increases from 0.4938 to 0.5396. Compared to PROMODES we observe an increase in recall between 0.1438 and 0.2351 at a cost of a decrease in precision between 0.0144 and 0.0616.",
"cite_spans": [],
"ref_spans": [
{
"start": 904,
"end": 911,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 940,
"end": 948,
"text": "Table 1a",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Learning with increasing experience",
"sec_num": "3.1"
},
{
"text": "Since both algorithms show different behaviour with increasing experience and PROMODES-H yields a higher f-measure across all datasets, we will investigate in the next experiments how these differences manifest themselves at the boundary level.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning with increasing experience",
"sec_num": "3.1"
},
{
"text": "Footnote 3: precision = TP / (TP + FP). Footnote 4: recall = TP / (TP + FN). Footnote 5: f-measure = 2 \u2022 precision \u2022 recall / (precision + recall).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning with increasing experience",
"sec_num": "3.1"
},
{
"text": "In the second experiment, we investigated which aspects of PROMODES-H in comparison to PROMODES led to the above-described differences in performance. For this reason, we broke down the summary measures of precision and recall into their original components: true/false positive (TP/FP) and negative (TN/FN) counts presented in the 2 \u00d7 2 contingency table of Figure 1. For general evidence, we averaged across all experiments using relative frequencies. Note that the relative frequencies of positives (TP + FN) and negatives (TN + FP) each sum to one. The goal was to find out how predictions in each word position changed when applying PROMODES-H instead of PROMODES. This would show where the algorithms agree and where they disagree. PROMODES classifies non-boundaries in 0.9472 of the time correctly as TN and in 0.0528 of the time falsely as boundaries (FP). The algorithm correctly labels 0.3045 of the positions as boundaries (TP) and 0.6955 falsely as non-boundaries (FN). We can see that PROMODES follows a rather conservative approach.",
"cite_spans": [],
"ref_spans": [
{
"start": 359,
"end": 367,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Position-wise comparison of algorithmic predictions",
"sec_num": "3.2"
},
{
"text": "When applying PROMODES-H, the majority of the FPs are turned into non-boundaries; however, a slightly higher number of previously correctly labelled non-boundaries are turned into false boundaries. The net change is a 0.0486 increase in FPs, which is the reason for the decrease in precision. On the other hand, more false non-boundaries (FN) are turned into boundaries than in the opposite direction, with a net increase of 0.0819 in correct boundaries, which led to the increased recall. Since the decrease in precision is smaller than the increase in recall, a better overall performance of PROMODES-H is achieved.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Position-wise comparison of algorithmic predictions",
"sec_num": "3.2"
},
{
"text": "In summary, PROMODES predicts non-boundaries more accurately, whereas PROMODES-H is better at finding morpheme boundaries. So far we have based our decision for placing a boundary in a certain word position on Equations 2 and 4, assuming that P(b_ji=1 | ...) > P(b_ji=0 | ...) gives the best result. However, if the underlying distribution for boundaries given the evidence is skewed, it might be possible to improve results by introducing a certain decision threshold for inserting morpheme boundaries. We will put this idea to the test in the following section.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Position-wise comparison of algorithmic predictions",
"sec_num": "3.2"
},
{
"text": "For the third experiment we slightly changed our experimental setup. Instead of dividing datasets during 10-fold cross-validation into training and test subsets with a ratio of 9:1, we randomly split the data into training, validation and test sets with a ratio of 8:1:1. We then ran our experiments and measured contingency table counts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Calibration of the decision threshold",
"sec_num": "3.3"
},
{
"text": "Rather than placing a boundary if P(b ji =1| . . .) > P(b ji =0| . . .) which corresponds to P(b ji =1| . . .) > 0.50 we introduced a decision threshold P(b ji =1| . . .) > h with 0 \u2264 h \u2264 1. This is based on the assumption that the underlying distribution P(b ji | . . .) might be skewed and an optimal decision can be achieved at a different threshold. The optimal threshold was sought on the validation set and evaluated on the test set. An overview over the validation and test results is given in Table 2. We want to point out that the threshold which yields the best f-measure result on the validation set returns almost the same result on the separate test set for both algorithms, which suggests the existence of a general optimal threshold.",
"cite_spans": [],
"ref_spans": [
{
"start": 501,
"end": 508,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Calibration of the decision threshold",
"sec_num": "3.3"
},
{
"text": "Since this experiment provided us with a set of data points where the recall varied monotonically with the threshold and the precision changed accordingly, we resorted to precision-recall curves (PR curves) from machine learning. Following Davis and Goadrich (2006), the algorithmic performance can be analysed more informatively using these kinds of curves. The PR curve is plotted with recall on the x-axis and precision on the y-axis for increasing thresholds h. The PR curves for PROMODES and PROMODES-H are shown in Figure 2 on the validation set from which we learnt our optimal thresholds h*. Points were connected for readability only; points on the PR curve cannot be interpolated linearly.",
"cite_spans": [
{
"start": 240,
"end": 265,
"text": "Davis and Goadrich (2006)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 521,
"end": 530,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Calibration of the decision threshold",
"sec_num": "3.3"
},
{
"text": "In addition to the PR curves, we plotted isometrics for corresponding f-measure values, which are defined as precision = f-measure \u2022 recall / (2 \u2022 recall \u2212 f-measure) and are hyperboles. For increasing f-measure values the isometrics move further towards the top-right corner of the plot. For a threshold of h = 0.50 (marked by '3') PROMODES-H has a better performance than PROMODES. Nevertheless, across the entire PR curve neither of the algorithms dominates. One curve would dominate another if all data points of the dominated curve were beneath or equal to the dominating one. PROMODES has its optimal threshold at h* = 0.36 and PROMODES-H at h* = 0.37, where PROMODES has a slightly higher f-measure than PROMODES-H. The points of optimal f-measure performance are marked with ' ' on the PR curve. Summarizing, we have shown that both algorithms commit different errors at the word position level, whereas PROMODES is better in predicting non-boundaries and PROMODES-H gives better results for morpheme boundaries at the default threshold of h = 0.50. In this section, we demonstrated that across different decision thresholds h for P(b_ji=1 | ...) > h neither algorithm dominates the other, and at the optimal threshold PROMODES achieves a slightly higher performance than PROMODES-H. The question that arises is whether we can combine PROMODES and PROMODES-H in an ensemble that leverages individual strengths of both.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Calibration of the decision threshold",
"sec_num": "3.3"
},
{
"text": "A model ensemble is a set of individually trained classifiers whose predictions are combined when classifying new instances (Opitz and Maclin, 1999). The idea is that by combining PROMODES and PROMODES-H, we would be able to avoid certain errors each model commits by consulting the other model as well. We introduce PROMODES-E as the ensemble of PROMODES and PROMODES-H. PROMODES-E accesses the individual probabilities Pr(b_ji=1 | ...) and simply averages them:",
"cite_spans": [
{
"start": 124,
"end": 148,
"text": "(Opitz and Maclin, 1999)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A model ensemble to leverage individual strengths",
"sec_num": "3.4"
},
{
"text": "[Pr(b_ji=1 | t_ji) + Pr(b_ji=1 | t_ji, b_j,i-1, t_j,i-1)] / 2 > h.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A model ensemble to leverage individual strengths",
"sec_num": "3.4"
},
{
"text": "As before, we used the default threshold h = 0.50 and found the calibrated threshold h * = 0.38, marked with '3' and ' ' in Figure 2 and shown in Table 3. The calibrated threshold improves the f-measure over both PROMODES and PROMODES-H. The optimal solution applying h* = 0.38 is more balanced between precision and recall and boosted the original result by 0.1185 on the test set. Compared to its components PROMODES and PROMODES-H, the f-measure increased by 0.0228 and 0.0353 on the test set.",
"cite_spans": [],
"ref_spans": [
{
"start": 124,
"end": 132,
"text": "Figure 2",
"ref_id": "FIGREF0"
},
{
"start": 146,
"end": 153,
"text": "Table 3",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "A model ensemble to leverage individual strengths",
"sec_num": "3.4"
},
{
"text": "In short, we have shown that by combining PROMODES and PROMODES-H and finding the optimal threshold, the ensemble PROMODES-E gives better results than the individual models themselves and therefore manages to leverage the individual strengths of both to a certain extent. However, can we pinpoint the exact contribution of each individual algorithm to the improved result? We try to find an answer to this question in the analysis of the subsequent section.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A model ensemble to leverage individual strengths",
"sec_num": "3.4"
},
{
"text": "For the entire dataset of 2500 words, we have examined boundary predictions dependent on the relative word position. In Figure 3 and 4 we have plotted the absolute counts of correct boundaries (TP) and non-boundaries (TN) which PROMODES predicted but not PROMODES-H, and vice versa, as continuous lines. We furthermore provided the number of individual predictions which were ultimately adopted by PROMODES-E in the ensemble as dashed lines.",
"cite_spans": [],
"ref_spans": [
{
"start": 120,
"end": 128,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis of calibrated algorithms and their model ensemble",
"sec_num": "3.5"
},
{
"text": "In Figure 3a we can see for the default threshold that PROMODES performs better in predicting non-boundaries in the middle and the end of the word in comparison to PROMODES-H. Figure 3b shows the statistics for correctly predicted boundaries. Here, PROMODES-H outperforms PRO-MODES in predicting correct boundaries across the entire word length. After the calibration, shown in Figure 4a , PROMODES-H improves the correct prediction of non-boundaries at the beginning of the word whereas PROMODES performs better at the end. For the boundary prediction in Figure 4b the signal disappears after calibration.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 12,
"text": "Figure 3a",
"ref_id": null
},
{
"start": 176,
"end": 185,
"text": "Figure 3b",
"ref_id": null
},
{
"start": 378,
"end": 387,
"text": "Figure 4a",
"ref_id": "FIGREF2"
},
{
"start": 556,
"end": 565,
"text": "Figure 4b",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Analysis of calibrated algorithms and their model ensemble",
"sec_num": "3.5"
},
{
"text": "Concluding, it appears that our test language Zulu has certain features which are modelled best with either a lower- or a higher-order model. Therefore, the ensemble leveraged the strengths of both algorithms, which led to a better overall performance with a calibrated threshold.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of calibrated algorithms and their model ensemble",
"sec_num": "3.5"
},
{
"text": "We have presented two probabilistic generative models for word decomposition, PROMODES and PROMODES-H. Another generative model for morphological analysis has been described by Snover and Brent (2001) and Snover et al. (2002) , however, they were interested in finding paradigms as sets of mutual exclusive operations on a word form whereas we are describing a generative process using morpheme boundaries and resulting letter transitions.",
"cite_spans": [
{
"start": 177,
"end": 200,
"text": "Snover and Brent (2001)",
"ref_id": "BIBREF20"
},
{
"start": 205,
"end": 225,
"text": "Snover et al. (2002)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "4"
},
{
"text": "Moreover, our probabilistic models seem to resemble Hidden Markov Models (HMMs) by having certain states and transitions. The main difference is that we have dependencies between states as well as between emissions whereas in HMMs emissions only depend on the underlying state.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "4"
},
{
"text": "Combining different morphological analysers has been performed, for example, by Atwell and Roberts (2006) and . Their approaches, though, used majority vote to decide whether a morpheme boundary is inserted in a certain word position or not. The algorithms themselves were treated as black-boxes. Monson et al. (2009) described an indirect approach to probabilistically combine ParaMor (Monson, 2008) and Morfessor (Creutz, 2006) . They used a natural language tagger which was trained on the output of ParaMor and Morfessor. The goal was to mimic each algorithm since ParaMor is rule-based and there is no access to Morfessor's internally used probabilities. The tagger would then return a probability for starting a new morpheme in a certain position based on the original algorithm. These probabilities in com-bination with a threshold, learnt on a different dataset, were used to merge word analyses. In contrast, our ensemble algorithm PROMODES-E directly accesses the probabilistic framework of each algorithm and combines them based on an optimal threshold learnt on a validation set.",
"cite_spans": [
{
"start": 80,
"end": 105,
"text": "Atwell and Roberts (2006)",
"ref_id": "BIBREF1"
},
{
"start": 297,
"end": 317,
"text": "Monson et al. (2009)",
"ref_id": "BIBREF12"
},
{
"start": 386,
"end": 400,
"text": "(Monson, 2008)",
"ref_id": "BIBREF13"
},
{
"start": 415,
"end": 429,
"text": "(Creutz, 2006)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "4"
},
{
"text": "We have presented a method to learn a calibrated decision threshold from a validation set and demonstrated that ensemble methods in connection with calibrated decision thresholds can give better results than the individual models themselves. We introduced two algorithms for word decomposition which are based on generative probabilistic models. The models consider segment boundaries as hidden variables and include probabilities for letter transitions within segments. PROMODES contains a lower order model whereas PROMODES-H is a novel development of PRO-MODES with a higher order model. For both algorithms, we defined the mathematical model and performed experiments on language data of the morphologically complex language Zulu. We compared the performance on increasing training set sizes and analysed for each word position whether their boundary prediction agreed or disagreed. We found out that PROMODES was better in predicting non-boundaries and PROMODES-H gave better results for morpheme boundaries at a default decision threshold. At an optimal decision threshold, however, both yielded a similar f-measure result. We then performed a further analysis based on relative word positions and found out that the calibrated PROMODES-H predicted non-boundaries better for initial word positions whereas the calibrated PROMODES for midand final word positions. For boundaries, the calibrated algorithms had a similar behaviour. Subsequently, we showed that a model ensemble of both algorithms in conjunction with finding an optimal threshold exceeded the performance of the single algorithms at their individually optimal threshold. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "Based on Equation 2 and 4 we use the notation P(b ji | . . .) if we do not want to specify the algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank Narayanan Edakunni and Bruno Gol\u00e9nia for discussions concerning this paper as well as the anonymous reviewers for their comments. The research described was sponsored by EPSRC grant EP/E010857/1 Learning the morphology of complex synthetic languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Morphology in machine translation systems: Efficient integration of finite state transducers and feature structure descriptions",
"authors": [
{
"first": "J",
"middle": [
"W"
],
"last": "Amtrup",
"suffix": ""
}
],
"year": 2003,
"venue": "Machine Translation",
"volume": "18",
"issue": "3",
"pages": "217--238",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. W. Amtrup. 2003. Morphology in machine trans- lation systems: Efficient integration of finite state transducers and feature structure descriptions. Ma- chine Translation, 18(3):217-238.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Proceedings of the PASCAL Challenges Workshop on Unsupervised Segmentation of Words into Morphemes",
"authors": [
{
"first": "E",
"middle": [],
"last": "Atwell",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Roberts",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Atwell and A. Roberts. 2006. Combinatory hy- brid elementary analysis of text (CHEAT). Proceed- ings of the PASCAL Challenges Workshop on Un- supervised Segmentation of Words into Morphemes, Venice, Italy.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Induction of the Morphology of Natural Language: Unsupervised Morpheme Segmentation with Application to Automatic Speech Recognition",
"authors": [
{
"first": "M",
"middle": [],
"last": "Creutz",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Creutz. 2006. Induction of the Morphology of Nat- ural Language: Unsupervised Morpheme Segmen- tation with Application to Automatic Speech Recog- nition. Ph.D. thesis, Helsinki University of Technol- ogy, Espoo, Finland.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The relationship between precision-recall and ROC curves",
"authors": [
{
"first": "J",
"middle": [],
"last": "Davis",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Goadrich",
"suffix": ""
}
],
"year": 2006,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "233--240",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Davis and M. Goadrich. 2006. The relationship between precision-recall and ROC curves. Interna- tional Conference on Machine Learning, Pittsburgh, PA, 233-240.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Modularity in a connectionist model of morphology acquisition",
"authors": [
{
"first": "M",
"middle": [],
"last": "Gasser",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of the 15th conference on Computational linguistics",
"volume": "1",
"issue": "",
"pages": "214--220",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Gasser. 1994. Modularity in a connectionist model of morphology acquisition. Proceedings of the 15th conference on Computational linguistics, 1:214-220.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Unsupervised learning of the morphology of a natural language",
"authors": [
{
"first": "J",
"middle": [],
"last": "Goldsmith",
"suffix": ""
}
],
"year": 2001,
"venue": "Computational Linguistics",
"volume": "27",
"issue": "",
"pages": "153--198",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Goldsmith. 2001. Unsupervised learning of the mor- phology of a natural language. Computational Lin- guistics, 27:153-198.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "The Handbook of Computational Linguistics, chapter Segmentation and morphology",
"authors": [
{
"first": "J",
"middle": [],
"last": "Goldsmith",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Goldsmith. 2009. The Handbook of Computational Linguistics, chapter Segmentation and morphology. Blackwell.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Word segmentation by letter successor varieties",
"authors": [
{
"first": "M",
"middle": [
"A"
],
"last": "Hafer",
"suffix": ""
},
{
"first": "S",
"middle": [
"F"
],
"last": "Weiss",
"suffix": ""
}
],
"year": 1974,
"venue": "Information Storage and Retrieval",
"volume": "10",
"issue": "",
"pages": "371--385",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. A. Hafer and S. F. Weiss. 1974. Word segmenta- tion by letter successor varieties. Information Stor- age and Retrieval, 10:371-385.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "From phoneme to morpheme",
"authors": [
{
"first": "Z",
"middle": [
"S"
],
"last": "Harris",
"suffix": ""
}
],
"year": 1955,
"venue": "Language",
"volume": "31",
"issue": "2",
"pages": "190--222",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Z. S. Harris. 1955. From phoneme to morpheme. Lan- guage, 31(2):190-222.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Unlimited vocabulary speech recognition with morph language models applied to Finnish",
"authors": [
{
"first": "T",
"middle": [],
"last": "Hirsimaki",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Creutz",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Siivola",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Kurimo",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Virpioja",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Pylkkonen",
"suffix": ""
}
],
"year": 2006,
"venue": "Computer Speech And Language",
"volume": "20",
"issue": "4",
"pages": "515--541",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Hirsimaki, M. Creutz, V. Siivola, M. Kurimo, S. Vir- pioja, and J. Pylkkonen. 2006. Unlimited vocabu- lary speech recognition with morph language mod- els applied to Finnish. Computer Speech And Lan- guage, 20(4):515-541.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Reductive and generative approaches to management of morphological variation of keywords in monolingual information retrieval: An overview",
"authors": [
{
"first": "K",
"middle": [],
"last": "Kettunen",
"suffix": ""
}
],
"year": 2009,
"venue": "Journal of Documentation",
"volume": "65",
"issue": "",
"pages": "267--290",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Kettunen. 2009. Reductive and generative ap- proaches to management of morphological variation of keywords in monolingual information retrieval: An overview. Journal of Documentation, 65:267 - 290.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Overview and results of Morpho Challenge",
"authors": [
{
"first": "M",
"middle": [],
"last": "Kurimo",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Virpioja",
"suffix": ""
},
{
"first": "V",
"middle": [
"T"
],
"last": "Turunen",
"suffix": ""
}
],
"year": 2009,
"venue": "Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Kurimo, S. Virpioja, and V. T. Turunen. 2009. Overview and results of Morpho Challenge 2009. Working notes for the CLEF 2009 Workshop, Corfu, Greece.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Probabilistic ParaMor. Working notes for the CLEF",
"authors": [
{
"first": "C",
"middle": [],
"last": "Monson",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Hollingshead",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Roark",
"suffix": ""
}
],
"year": 2009,
"venue": "Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Monson, K. Hollingshead, and B. Roark. 2009. Probabilistic ParaMor. Working notes for the CLEF 2009 Workshop, Corfu, Greece.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "ParaMor: From Paradigm Structure To Natural Language Morphology Induction",
"authors": [
{
"first": "C",
"middle": [],
"last": "Monson",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Monson. 2008. ParaMor: From Paradigm Structure To Natural Language Morphology Induc- tion. Ph.D. thesis, Language Technologies Institute, School of Computer Science, Carnegie Mellon Uni- versity, Pittsburgh, PA, USA.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Learning the past tense of English verbs using inductive logic programming. Symbolic, Connectionist, and Statistical Approaches to Learning for Natural Language Processing",
"authors": [
{
"first": "R",
"middle": [
"J"
],
"last": "Mooney",
"suffix": ""
},
{
"first": "M",
"middle": [
"E"
],
"last": "Califf",
"suffix": ""
}
],
"year": 1996,
"venue": "",
"volume": "",
"issue": "",
"pages": "370--384",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. J. Mooney and M. E. Califf. 1996. Learning the past tense of English verbs using inductive logic pro- gramming. Symbolic, Connectionist, and Statistical Approaches to Learning for Natural Language Pro- cessing, 370-384.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Analogical prediction",
"authors": [
{
"first": "S",
"middle": [],
"last": "Muggleton",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Bain",
"suffix": ""
}
],
"year": 1999,
"venue": "Inductive Logic Programming: 9th International Workshop, ILP-99",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Muggleton and M. Bain. 1999. Analogical predic- tion. Inductive Logic Programming: 9th Interna- tional Workshop, ILP-99, Bled, Slovenia, 234.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Bootstrapping morphological analyzers by combining human elicitation and machine learning",
"authors": [
{
"first": "K",
"middle": [],
"last": "Oflazer",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Nirenburg",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Mcshane",
"suffix": ""
}
],
"year": 2001,
"venue": "Computational. Linguistics",
"volume": "27",
"issue": "1",
"pages": "59--85",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Oflazer, S. Nirenburg, and M. McShane. 2001. Bootstrapping morphological analyzers by combin- ing human elicitation and machine learning. Com- putational. Linguistics, 27(1):59-85.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Popular ensemble methods: An empirical study",
"authors": [
{
"first": "D",
"middle": [],
"last": "Opitz",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Maclin",
"suffix": ""
}
],
"year": 1999,
"venue": "Journal of Artificial Intelligence Research",
"volume": "11",
"issue": "",
"pages": "169--198",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Opitz and R. Maclin. 1999. Popular ensemble methods: An empirical study. Journal of Artificial Intelligence Research, 11:169-198.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "On learning the past tenses of English verbs",
"authors": [
{
"first": "D",
"middle": [
"E"
],
"last": "Rumelhart",
"suffix": ""
},
{
"first": "J",
"middle": [
"L"
],
"last": "Mcclelland",
"suffix": ""
}
],
"year": 1986,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. E. Rumelhart and J. L. McClelland. 1986. On learning the past tenses of English verbs. MIT Press, Cambridge, MA, USA.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Towards learning morphology for under-resourced fusional and agglutinating languages",
"authors": [
{
"first": "K",
"middle": [],
"last": "Shalonova",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Gol\u00e9nia",
"suffix": ""
},
{
"first": "P",
"middle": [
"A"
],
"last": "Flach",
"suffix": ""
}
],
"year": 2009,
"venue": "Speech, and Language Processing",
"volume": "17",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Shalonova, B. Gol\u00e9nia, and P. A. Flach. 2009. To- wards learning morphology for under-resourced fu- sional and agglutinating languages. IEEE Transac- tions on Audio, Speech, and Language Processing, 17(5):956965.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A Bayesian model for morpheme and paradigm identification",
"authors": [
{
"first": "M",
"middle": [
"G"
],
"last": "Snover",
"suffix": ""
},
{
"first": "M",
"middle": [
"R"
],
"last": "Brent",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 39th Annual Meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "490--498",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. G. Snover and M. R. Brent. 2001. A Bayesian model for morpheme and paradigm identification. Proceedings of the 39th Annual Meeting on Asso- ciation for Computational Linguistics, 490 -498.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Unsupervised learning of morphology using a novel directed search algorithm: Taking the first step",
"authors": [
{
"first": "M",
"middle": [
"G"
],
"last": "Snover",
"suffix": ""
},
{
"first": "G",
"middle": [
"E"
],
"last": "Jarosz",
"suffix": ""
},
{
"first": "M",
"middle": [
"R"
],
"last": "Brent",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the ACL-02 workshop on Morphological and phonological learning",
"volume": "6",
"issue": "",
"pages": "11--20",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. G. Snover, G. E. Jarosz, and M. R. Brent. 2002. Unsupervised learning of morphology using a novel directed search algorithm: Taking the first step. Pro- ceedings of the ACL-02 workshop on Morphological and phonological learning, 6:11-20.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Learning the morphology of Zulu with different degrees of supervision",
"authors": [
{
"first": "S",
"middle": [],
"last": "Spiegler",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Gol\u00e9nia",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Shalonova",
"suffix": ""
},
{
"first": "P",
"middle": [
"A"
],
"last": "Flach",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Tucker",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Spiegler, B. Gol\u00e9nia, K. Shalonova, P. A. Flach, and R. Tucker. 2008. Learning the morphology of Zulu with different degrees of supervision. IEEE Work- shop on Spoken Language Technology.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Promodes: A probabilistic generative model for word decomposition. Working Notes for the CLEF",
"authors": [
{
"first": "S",
"middle": [],
"last": "Spiegler",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Gol\u00e9nia",
"suffix": ""
},
{
"first": "P",
"middle": [
"A"
],
"last": "Flach",
"suffix": ""
}
],
"year": 2009,
"venue": "Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Spiegler, B. Gol\u00e9nia, and P. A. Flach. 2009. Pro- modes: A probabilistic generative model for word decomposition. Working Notes for the CLEF 2009 Workshop, Corfu, Greece.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Unsupervised word decomposition with the Promodes algorithm",
"authors": [
{
"first": "S",
"middle": [],
"last": "Spiegler",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Gol\u00e9nia",
"suffix": ""
},
{
"first": "P",
"middle": [
"A"
],
"last": "Flach",
"suffix": ""
}
],
"year": 2009,
"venue": "Multilingual Information Access Evaluation",
"volume": "I",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Spiegler, B. Gol\u00e9nia, and P. A. Flach. 2010a. Un- supervised word decomposition with the Promodes algorithm. In Multilingual Information Access Eval- uation Vol. I, CLEF 2009, Corfu, Greece, Lecture Notes in Computer Science, Springer.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Ukwabelana -An open-source morphological Zulu corpus",
"authors": [
{
"first": "S",
"middle": [],
"last": "Spiegler",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Spuy",
"suffix": ""
},
{
"first": "P",
"middle": [
"A"
],
"last": "Flach",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Spiegler, A. v. d. Spuy, and P. A. Flach. 2010b. Uk- wabelana -An open-source morphological Zulu cor- pus. in review.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Multilingual text analysis for text-tospeech synthesis",
"authors": [
{
"first": "R",
"middle": [],
"last": "Sproat",
"suffix": ""
}
],
"year": 1996,
"venue": "Nat. Lang. Eng",
"volume": "2",
"issue": "4",
"pages": "369--380",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Sproat. 1996. Multilingual text analysis for text-to- speech synthesis. Nat. Lang. Eng., 2(4):369-380.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Precision-recall curves for algorithms on validation set.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"text": "(unique TP) Promodes and Promodes\u2212E (unique TP) Promodes\u2212H and Promodes\u2212E (unique TP) (b) True positives, calibrated",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF2": {
"text": "Analysis of results using calibrated threshold.",
"uris": null,
"num": null,
"type_str": "figure"
},
"TABREF0": {
"content": "<table><tr><td>Data</td><td>Precision</td><td>Recall</td><td>F-measure</td></tr><tr><td colspan=\"3\">500 0.(a) PROMODES</td><td/></tr><tr><td>Data</td><td>Precision</td><td>Recall</td><td>F-measure</td></tr><tr><td colspan=\"4\">500 0.6983\u00b10.0511 0.4938\u00b10.0404 0.5776\u00b10.0395</td></tr><tr><td colspan=\"4\">1000 0.6865\u00b10.0298 0.5177\u00b10.0177 0.5901\u00b10.0205</td></tr><tr><td colspan=\"4\">1500 0.6952\u00b10.0308 0.5376\u00b10.0197 0.6058\u00b10.0173</td></tr><tr><td colspan=\"4\">2000 0.7008\u00b10.0140 0.5316\u00b10.0146 0.6044\u00b10.0110</td></tr><tr><td colspan=\"4\">2500 0.6941\u00b10.0184 0.5396\u00b10.0218 0.6068\u00b10.0151</td></tr><tr><td/><td colspan=\"2\">(b) PROMODES-H</td><td/></tr></table>",
"type_str": "table",
"num": null,
"text": ". 7127\u00b10.0418 0.3500\u00b10.0272 0.4687\u00b10.0284 1000 0.7435\u00b10.0556 0.3350\u00b10.0197 0.4614\u00b10.0250 1500 0.7460\u00b10.0529 0.3160\u00b10.0150 0.4435\u00b10.0206 2000 0.7504\u00b10.0235 0.3068\u00b10.0141 0.4354\u00b10.0168 2500 0.7557\u00b10.0356 0.3045\u00b10.0138 0.4337\u00b10.0163",
"html": null
},
"TABREF1": {
"content": "<table/>",
"type_str": "table",
"num": null,
"text": "",
"html": null
},
"TABREF4": {
"content": "<table/>",
"type_str": "table",
"num": null,
"text": "PROMODES and PROMODES-H on validation and test set.",
"html": null
},
"TABREF6": {
"content": "<table/>",
"type_str": "table",
"num": null,
"text": "",
"html": null
}
}
}
}