| { |
| "paper_id": "2020", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T15:33:57.888774Z" |
| }, |
| "title": "Voted-Perceptron Approach for Kazakh Morphological Disambiguation", |
| "authors": [ |
| { |
| "first": "Gulmira", |
| "middle": [], |
| "last": "Tolegen", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "gulmira.tolegen.cs@gmail.com" |
| }, |
| { |
| "first": "Alymzhan", |
| "middle": [], |
| "last": "Toleu", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "alymzhan.toleu@gmail.com" |
| }, |
| { |
| "first": "Rustam", |
| "middle": [], |
| "last": "Mussabayev", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "This paper presents an approach of voted perceptron for morphological disambiguation for the case of Kazakh language. Guided by the intuition that the feature value from the correct path of analyses must be higher than the feature value of non-correct path of analyses, we propose the voted perceptron algorithm with Viterbi decoding manner for disambiguation. The approach can use arbitrary features to learn the feature vector for a sequence of analyses, which plays a vital role for disambiguation. Experimental results show that our approach outperforms other statistical and rule-based models. Moreover, we manually annotated a new morphological disambiguation corpus for Kazakh language.", |
| "pdf_parse": { |
| "paper_id": "2020", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "This paper presents an approach of voted perceptron for morphological disambiguation for the case of Kazakh language. Guided by the intuition that the feature value from the correct path of analyses must be higher than the feature value of non-correct path of analyses, we propose the voted perceptron algorithm with Viterbi decoding manner for disambiguation. The approach can use arbitrary features to learn the feature vector for a sequence of analyses, which plays a vital role for disambiguation. Experimental results show that our approach outperforms other statistical and rule-based models. Moreover, we manually annotated a new morphological disambiguation corpus for Kazakh language.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Morphological analysis and disambiguation play a vital role in handling the problems of i) reducing the complexity of the word structures and ii) alleviating the data sparsity issue. A morphological analyzer can decompose any raw word into a sequence of morphological tags and it produces more than one analysis per word. An example is given in Table 2 , where a simple Kazakh phrase is analyzed and each word has more than one analysis. Morphological disambiguation (MD) is the task of selecting the correct analysis among the candidate analyses by leveraging the context information. Kazakh is an agglutinative language with rich morphology. A root/stem in Kazakh may produce hundreds or thousands of new words. It is apparent from below that Kazakh has large unique tokens than English, which leads to the data sparsity problem. Developing an accurate disambiguation approach is appealing because it can alleviate the data sparseness problem caused by rich morphology. Most researchers investigating Kazakh MD have utilised Hidden markov model (HMM) (Assylbekov et al., 2016; Makhambetov et al., 2015) as the statistical model. There are several problems with the use of this model: i) the strong assumption of HMM makes it not flexible to use arbitrary features; ii) the complexity of the task itself makes the model not tractable in practice when using a full analysis as labels (breaking-down an analysis into smaller units may work for this case, but the cost may be a loss of accuracy); iii) it cannot capture the longdistance dependency.", |
| "cite_spans": [ |
| { |
| "start": 1053, |
| "end": 1078, |
| "text": "(Assylbekov et al., 2016;", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 1079, |
| "end": 1104, |
| "text": "Makhambetov et al., 2015)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 345, |
| "end": 352, |
| "text": "Table 2", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "In this paper, we present an approach of voted perceptron for MD with a new manually-annotated corpus. We treat an analysis kala n nom e cop aor p3 pl as a combination of three main constituents: root, POS, and morpheme chain:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "kala [root] + n [POS] + nom e cop aor p3 pl [morpheme chain] (1)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "As we can see that a full analyses have a complex structure, which means the model must correctly predict every single tag in these three parts including the root. The idea behind of these constituents is that we try to represent a sequence analysis with feature vectors. To learn the feature vectors for each sequence of analysis, we present a votedperceptron approach for MD. The underlying hypothesis is that we need to train the model as follows: the feature vector of the extracted sequence analysis in the correct path should have a large value than those in the non-correct path.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "In order to improve the model's performance, we use a set of features and assess how these features affect the results.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "In the experiment, we try to evaluate how the breakingdown technique of analysis affects the model performance and evaluate what is the optimal length of morpheme chain (MC) for disambiguation. The proposed approaches do not need to specify the hand-rules (like constrained grammars (CGs) does), and the approach achieves better results than both the statistical and rule-based models.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "In general, the two tasks morphological disambiguation and morphological tagging are similar to each other. The difference between morphological disambiguation and the morphological tagging is that the latter one only makes prediction through the surface word and it is harder than MD. The former is able to access the possible candidates of analysis, which more designed to solve ambiguity of analysis produced by an analyzer.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2." |
| }, |
| { |
| "text": "Several approaches have been applied for the morphological disambiguation and can be categorized as follows: rulebased, statistical-based model with discrete features, neural network-based and hybrid approaches. Makhambetov et al. (2015) presented a data-driven approach for Kazakh morphological analysis and disambiguation that was based Kala Kelbeti Zhana ... kala n nom kala n attr kala n nom e cop aor p3 pl kala n nom e cop aor p3 sg kal v iv prc impf kala v tv imp p2 sg kal vaux prc impf kelbet n px3sp nom kelbet n px3sp nom e cop aor p3 pl kelbet n px3sp nom e cop aor p3 sg zhana adj zhana adv zhana adj advl zhana adj subst nom ...", |
| "cite_spans": [ |
| { |
| "start": 212, |
| "end": 237, |
| "text": "Makhambetov et al. (2015)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Morphological Disambiguation", |
| "sec_num": "2.1." |
| }, |
| { |
| "text": "... In recent years, deep learning arguably achieved tremendous success in many research areas such as NLP (Tolegen et al., 2019; Mikolov et al., 2013; Toleu et al., 2017b; Dayanik et al., 2018; Toleu et al., 2019) , speech signal processing (Mamyrbayev et al., 2019) and computer vision (Girdhar et al., 2019; Pang et al., 2019) . Toleu et al. (2017a) presented a neural network-based disambiguation model, in which the author proposed to measure the distance of the two embeddings: the context and the morphological analyses. In order to measure the distances, the author applies neural networks to learn the context representation from characters and represents the morphological analyses as well. The correct analysis should more similar to the context's embedding; in other words, they are closely arranged in the vector space compared to the other candidate analyses.", |
| "cite_spans": [ |
| { |
| "start": 107, |
| "end": 129, |
| "text": "(Tolegen et al., 2019;", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 130, |
| "end": 151, |
| "text": "Mikolov et al., 2013;", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 152, |
| "end": 172, |
| "text": "Toleu et al., 2017b;", |
| "ref_id": null |
| }, |
| { |
| "start": 173, |
| "end": 194, |
| "text": "Dayanik et al., 2018;", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 195, |
| "end": 214, |
| "text": "Toleu et al., 2019)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 242, |
| "end": 267, |
| "text": "(Mamyrbayev et al., 2019)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 288, |
| "end": 310, |
| "text": "(Girdhar et al., 2019;", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 311, |
| "end": 329, |
| "text": "Pang et al., 2019)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 332, |
| "end": 352, |
| "text": "Toleu et al. (2017a)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Morphological Disambiguation", |
| "sec_num": "2.1." |
| }, |
| { |
| "text": "Morphological tagging has been studied extensively for the decade, here we review the work most relevant to this paper Mueller et al. (2013) presented a pruned CRF (PCRF) for morphological tagging and proposed to use coarse-to-fine decoding and early updating to train the higher-order CRF. Experiments on six languages show that the PCRF gives significant improvements in accuracy. M\u00fcller and Sch\u00fctze (2015) compared the performance of the most important representations that can be used for across-domain morphological tagging. One of their findings is that the representations similar to Brown clusters perform best for POS tagging and that word representations based on linguistic morphological analyzers perform best for morphological tagging. Malaviya et al. (2018) combines neural networks and graphical models presented a framework for cross-lingual morphological tagging. Instead of predicting full tag sets, the model predicts single tags separately and modeling the dependencies between tags over time steps. The model is able to generate tag sets unseen in training data, and share information between similar tag sets. This model is about cross-lingual MT and we do not make comparisons with monolingual morphological tagging models. Tkachenko and Sirts (2018) presented a sequence to sequence model for morphological tagging. The model learns the internal structure of morphological labels by treating them as sequences of morphological feature values and applies a similar strategy of neural sequence-to-sequence models commonly used for machine translation (Sutskever et al., 2014) to do morphological tagging. The authors explored different neural architectures and compare their performance with both PCRF (Mueller et al., 2013) . Double layer of biLSTMs were applied in those neural architectures as Encoder (Ling et al., 2015; Labeau et al., 2015; Ma and Hovy, 2016) . 
The encoder uses one biLSTM to compute character embedding and the second biLSTM combine the obtained character embedding along with pre-trained word embedding to generate word context embeddings. The output of those neural networks are different: one of the baselines is to use a single output layer to predict whole morphological labels. As the second baseline, the output layer can be changed to predict the different morphological value of tag with separate output layers. An improved version of the second one is to use a hierarchical separate output layers in order to capture dependencies between tags.", |
| "cite_spans": [ |
| { |
| "start": 119, |
| "end": 140, |
| "text": "Mueller et al. (2013)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 383, |
| "end": 408, |
| "text": "M\u00fcller and Sch\u00fctze (2015)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 749, |
| "end": 771, |
| "text": "Malaviya et al. (2018)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 1247, |
| "end": 1273, |
| "text": "Tkachenko and Sirts (2018)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 1573, |
| "end": 1597, |
| "text": "(Sutskever et al., 2014)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 1724, |
| "end": 1746, |
| "text": "(Mueller et al., 2013)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 1827, |
| "end": 1846, |
| "text": "(Ling et al., 2015;", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 1847, |
| "end": 1867, |
| "text": "Labeau et al., 2015;", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 1868, |
| "end": 1886, |
| "text": "Ma and Hovy, 2016)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Morphological Tagging", |
| "sec_num": "2.2." |
| }, |
| { |
| "text": "Let w = w 1 , w 2 , ..., w n be a sentence of length n words and t = t 1 , t 2 , ...t n be corresponding morphological analysis sequence. c = (t c1 1 , t cm 1 ), ..., (t c1 n , t cm n ) is the candidate analysis of each word. m is the number of candidates, and it can be vary to each word. Morphological disambiguation is the problem of finding the t from c given the w: where P r(w) is a constant and can be ignored. To compute P r(t) and P r(w|t), the first-order HMM assumptions are applied to simplify the analysis transition probability into that the current analysis depends only on previous analysis.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "HMM-based Disambiguation", |
| "sec_num": "3." |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "P r(t) = n i=1 P r(t i |t i\u22121 ) = n i=1 \u03b1 + c(t i , t i\u22121 ) \u03b1|T | + c(t i\u22121 )", |
| "eq_num": "(3)" |
| } |
| ], |
| "section": "HMM-based Disambiguation", |
| "sec_num": "3." |
| }, |
| { |
| "text": "# Features Description 0 w i word context 1 r i lemma/stem of the word 2 P OS i POS tags, such as noun, verb etc. 3 mc i full morpheme chain 4 ma i a full morphological analysis 5 #t the number of the tags 6 wc i word case 7 ps i plural and singular tags where c(t i , t i\u22121 ) counts the number of occurrences of t i , t i\u22121 in the corpus. \u03b1 is smoothing number. T the unique number of tags in the corpus.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "HMM-based Disambiguation", |
| "sec_num": "3." |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "P r(w|t) = n i=1 P r(w i |t i ) = n i=1 \u03b1 + c(w i , t i ) \u03b1|V | + c(t i )", |
| "eq_num": "(4)" |
| } |
| ], |
| "section": "HMM-based Disambiguation", |
| "sec_num": "3." |
| }, |
| { |
| "text": "where c(w i , t i ) is the number of occurrences of word w i with tag t i . |V | is the unique word number. Using above transition and emission scores, we could apply Viterbi decoding to find the best path of analysis. In practice, there are several drawbacks of above approach when applying it on disambiguation task directly: i) if we consider each full analysis as a tag, the unique number of tag become 19,236 (observed in our corpus), then the number of parameters of transition probability will be 19, 236 2 , and the number of parameters of emission probability become even more. ii) breaking down analysis into small subtags can definitely decrease the number of tags and it is tractable, but it has an effect on model performance. iii) first-order HMM not able to capture the long-term dependency information.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "HMM-based Disambiguation", |
| "sec_num": "3." |
| }, |
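The smoothed first-order HMM disambiguator described above (Eqs. 3 and 4 with Viterbi decoding over candidate analyses) can be sketched as follows. This is an illustrative reimplementation, not the authors' code; the function names, the toy training sentences, and the default smoothing value `alpha=0.1` are assumptions.

```python
from collections import Counter

def train_hmm(tagged_sents):
    """Collect the counts used by Eqs. (3) and (4):
    c(t_i, t_{i-1}), c(w_i, t_i), c(t), plus the tag set T and vocabulary V."""
    trans, emit, count = Counter(), Counter(), Counter()
    tags, vocab = set(), set()
    for sent in tagged_sents:
        prev = "<s>"
        count[prev] += 1
        for word, tag in sent:
            trans[(prev, tag)] += 1
            emit[(tag, word)] += 1
            count[tag] += 1
            tags.add(tag)
            vocab.add(word)
            prev = tag
    return trans, emit, count, tags, vocab

def viterbi(words, cands, model, alpha=0.1):
    """Find the best path over each word's candidate analyses,
    with add-alpha smoothing as in Eqs. (3) and (4)."""
    trans, emit, count, tags, vocab = model
    T, V = len(tags), len(vocab)
    chart = [{} for _ in words]  # per position: tag -> (score, backpointer)
    for i, w in enumerate(words):
        for t in cands[i]:
            e = (alpha + emit[(t, w)]) / (alpha * V + count[t])
            if i == 0:
                p = (alpha + trans[("<s>", t)]) / (alpha * T + count["<s>"])
                chart[0][t] = (p * e, None)
            else:
                score, prev = max(
                    (chart[i - 1][pt][0]
                     * (alpha + trans[(pt, t)]) / (alpha * T + count[pt])
                     * e, pt)
                    for pt in chart[i - 1])
                chart[i][t] = (score, prev)
    # backtrace from the best final tag
    best = max(chart[-1], key=lambda t: chart[-1][t][0])
    path = [best]
    for i in range(len(words) - 1, 0, -1):
        best = chart[i][best][1]
        path.append(best)
    return path[::-1]
```

On a toy corpus where the ambiguous word appears after two different contexts, the decoder picks the candidate consistent with the observed transition counts.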
| { |
| "text": "In this section, we describe the voted perceptron-based approach for disambiguation. A major advantage of this approach is that it allows us to use arbitrary features and extracts features from both words and the candidate analyses.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Voted Perceptron-based Disambiguation", |
| "sec_num": "4." |
| }, |
| { |
| "text": "In order to generalize the morphological analyses and to train the perceptron algorithm, we use a set of features to generate feature vector as representation of the analyses using global feature function \u03a6(\u2022). Table 3 summarizes the feature categories. Let \u03c6(\u2022) function be the local feature function which is indicator function, it maps the input to an d-dimentional feature vectors. For example, if the template only contains these two: 1) w 0 /w \u22121 ; 2)P OS \u22121 /P OS 0 /P OS 1 . w i denotes for word in the position i-th, and P OS i is part-of-speech. \u03c6(\u2022) extracts the current and previous word with previous, current and next word POS tag as local features through this template at each step to make the disambiguation. The global feature representation is the sum of all local features for input sequence:", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 211, |
| "end": 218, |
| "text": "Table 3", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Feature Vectors", |
| "sec_num": "4.1." |
| }, |
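The local and global feature functions of Eq. (5) can be sketched with sparse count vectors. This is a minimal sketch: the feature-string format, the boundary symbols `<s>`/`</s>`, and the restriction to the two templates named in the text are illustrative assumptions, not the paper's exact feature set.

```python
from collections import Counter

def local_features(words, pos, i):
    """phi(.): indicator features for position i from the two example
    templates in the text: 1) w_0/w_-1 and 2) POS_-1/POS_0/POS_+1."""
    w_prev = words[i - 1] if i > 0 else "<s>"
    p_prev = pos[i - 1] if i > 0 else "<s>"
    p_next = pos[i + 1] if i + 1 < len(pos) else "</s>"
    return Counter({
        "w0=%s/w-1=%s" % (words[i], w_prev): 1,
        "pos-1=%s/pos0=%s/pos+1=%s" % (p_prev, pos[i], p_next): 1,
    })

def global_features(words, pos):
    """Phi(.): sum of the local feature vectors over the sequence (Eq. 5)."""
    phi = Counter()
    for i in range(len(words)):
        phi.update(local_features(words, pos, i))
    return phi
```

Representing features as a `Counter` keeps the vector sparse: only the indicator features that actually fire are stored, which matters when the template set is large.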
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "\u03a6(\u2022) = n \u03c6(\u2022)", |
| "eq_num": "(5)" |
| } |
| ], |
| "section": "Feature Vectors", |
| "sec_num": "4.1." |
| }, |
| { |
| "text": "To estimate the parameters of the model, we apply the perceptron training algorithms (Collins, 2002) shown in Figure 1 . z i \u2208 z is a predicted path and \u03a6(\u2022) is global feature function that generate features. As we can see, it increases the parameter values for features which are extracted from correct morphological analysis's sequence, and down weighting parameter values for features extracted in the noncorrect morphological analysis's sequence. The final analyses path is decoded through the Viterbi algorithm. Data: Training examples (w i , t i ).", |
| "cite_spans": [ |
| { |
| "start": 85, |
| "end": 100, |
| "text": "(Collins, 2002)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 110, |
| "end": 119, |
| "text": "Figure 1", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Parameter Estimation", |
| "sec_num": "4.2." |
| }, |
| { |
| "text": "Result: Parameters a.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Parameter Estimation", |
| "sec_num": "4.2." |
| }, |
| { |
| "text": "Initialization: set parameters a = 0; for e \u2190 1 to Epoch do: for i \u2190 1 to n do: calculate z_i = argmax_{z \u2208 GEN(x_i)} \u03a6(x_i, z) \u00b7 a; if z_i \u2260 t_i then a = a + \u03a6(x_i, t_i) \u2212 \u03a6(x_i, z_i); end; end; end.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Parameter Estimation", |
| "sec_num": "4.2." |
| }, |
| { |
| "text": "Algorithm 1: Voted-Perceptron algorithm.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Parameter Estimation", |
| "sec_num": "4.2." |
| }, |
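The update rule of Algorithm 1 can be sketched as follows. This is an illustrative sketch, not the authors' implementation: it uses the plain (unaveraged) perceptron update shown in the pseudocode, a toy feature function standing in for the paper's templates, and a brute-force argmax over candidate sequences in place of Viterbi decoding.

```python
from collections import Counter
from itertools import product

def phi(words, tags):
    """Toy global feature function: word/tag and tag-bigram indicators
    (stand-ins for the paper's feature templates)."""
    f = Counter()
    prev = "<s>"
    for w, t in zip(words, tags):
        f["w=%s/t=%s" % (w, t)] += 1
        f["t-1=%s/t=%s" % (prev, t)] += 1
        prev = t
    return f

def score(f, a):
    return sum(a[k] * v for k, v in f.items())

def predict(words, cands, a):
    """argmax over GEN(x): here a brute-force search over all candidate
    tag sequences (Viterbi decoding would be used in practice)."""
    return list(max(product(*cands), key=lambda z: score(phi(words, z), a)))

def train(data, epochs=5):
    """Perceptron updates as in Algorithm 1: a += Phi(x, t) - Phi(x, z)."""
    a = Counter()
    for _ in range(epochs):
        for words, cands, gold in data:
            z = predict(words, cands, a)
            if z != gold:
                a.update(phi(words, gold))   # reward the gold path's features
                a.subtract(phi(words, z))    # penalize the predicted path's
    return a
```

After a couple of passes over a toy data set, the learned weights separate the gold paths from the competing candidates, exactly the behaviour the update rule is designed to produce.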
| { |
| "text": "One of the aims of this work is to create a manually annotated morphological disambiguation data set as the database for future further research. As known that the task of annotating data is a time-consuming and tedious work. In order to assist the annotation process and to improve the correctness of the annotation, we build an annotation tool with user-friendly interface. Figure 1 shows a screenshot of an annotation process. The annotation process is not trivial and slow, the annotator should annotate every single word appeared in the document. We can briefly illustrate the annotation process as follows: click a word, then the corresponding candidate analyses will show up for annotation; the annotator not only considers the context of that word but also consider the previous/future words' morphological tags to make the decision. We randomly selected 110 documents from the general news media 1 as the data source for annotation. The annotations have been executed manually by native speaker of Kazakh. The proposed approaches were evaluated on the new morphological disambiguation data set. The corpus consists of 15,466 tokens, and 90% is used as the training set, 10% for the test set. ", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 376, |
| "end": 384, |
| "text": "Figure 1", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Corpus Construction", |
| "sec_num": "5.1." |
| }, |
| { |
| "text": "There are 19,236 analysis 2 can be observed in our current corpus, which results in that the number of model's parameters be very large and sparse when training the HMM. In practice, the memory overflow error was raised when we directly use the full analyses as labels. It worth to note that the number of analyses increases with the increase of the corpus volume because each root can produce hundred or thousand new words in Kazakh. To reduce the tag number, we tried to break-down the analysis into smaller units excluding the roots. Because there are 4,050 roots in the corpus, and if we only take the last one tag of the morpheme chain with root as an analysis, the number is still large about 11,167. Table 3 with suitable feature template.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 707, |
| "end": 714, |
| "text": "Table 3", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Model Setup", |
| "sec_num": "5.2." |
| }, |
| { |
| "text": "We report the accuracy results for the overall (all tokens), known tokens, and unknown tokens. In terms of strictness, we deem correct only the predictions that match the golden truth completely, i.e. in root, POS and MC (up to a single morpheme tag). Unless stated otherwise we refer to the overall accuracy when comparing model performances. Table 6 shows the results of HMM models with different tag sets. The top half of the Table 6 shows that the performances are relatively low when the tag from the morpheme chain was used only. The accuracy of models (M-1, M-2, M-3) use the last 1, 2 and 3 tags of morpheme chain are 72.04%, 78.72% and 78.47% respectively. Model M-4 uses full morpheme chain and gives 78.60%. To investigate this further, we take the last 4 tags of morpheme chain as labels and found that the result is 78.6% same with M-4's.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 344, |
| "end": 351, |
| "text": "Table 6", |
| "ref_id": "TABREF10" |
| }, |
| { |
| "start": 429, |
| "end": 436, |
| "text": "Table 6", |
| "ref_id": "TABREF10" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "5.3." |
| }, |
| { |
| "text": "The bottom half of the Table 6 shows the results of all models trained with POS+certain morpheme tags. It is apparent from this table that the models M-6 (POS + last 2 tags) and M-7 (POS + last 3 tags) give same accuracy 84.91% which is the best result among them. No significant differences were found between M-7 and M-8 and the latter one was trained with POS + full morpheme chain, and M-8 shows a little drop over M-7 in terms of unknown tokens accuracy. According to these results, we found that the HMM model achieves the best result when using POS+the last 2/3 morpheme tags. There is no significant improvement when increasing the length of the morpheme tags. Table 7 presents the results for the voted-perceptron approach and it shows how each feature affects the model. A plus (+) sign before the feature name indicates that these feature combinations are added on top of the rest with suitable feature templates. It can be seen that the proposed model enhanced with the word context (+w) only gives 53.68% accuracy. When adding the root feature (+r) to the model (trained with +w and +r features), the model performance can be improved to 62.09%, which means that the root feature contributes around 8% improvements. It is apparent that the pos feature (+pos) contributes 8.47%, and the model achieves 70.56% accuracy. As we expected, the morpheme chain features (+mc) contributes most to model performance. It gives 18.55% improvement over the accumulation of previous features and the model ends up with 89.11% overall accuracy. The feature of full morphological analysis (+ma) only gives a minor improvement. Other features like +t, +wc, and +ps provide positive effect and finally the proposed approach achieves 90.53% overall accuracy. Table 8 compares the best results obtained from the HMMs and the voted-perceptron. It is apparent from this table that the proposed approach outperforms than HMM by 5.62% overall accuracy, and 10.69% unknown tokens accuracy.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 23, |
| "end": 30, |
| "text": "Table 6", |
| "ref_id": "TABREF10" |
| }, |
| { |
| "start": 669, |
| "end": 676, |
| "text": "Table 7", |
| "ref_id": "TABREF11" |
| }, |
| { |
| "start": 1753, |
| "end": 1760, |
| "text": "Table 8", |
| "ref_id": "TABREF12" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "5.3." |
| }, |
| { |
| "text": "To compare the proposed approach with previous work, we take the two existing models as baselines: i) a statistical Table 9 : Comparison of the best results from HMM-based models and those of voted-perceptron. model proposed by Assylbekov et al. (2016) 3 and ii) a rulebased constrained grammar tool from the Apertium-kaz CG tagger 4 . These tools cannot applied to our data set directly 5 , instead of converting the tools, we evaluate all models on their data set (Assylbekov et al., 2016) , and the proposed voted-perceptron was trained on this data with the corresponding features. Since voted-perceptron is a purely statistical model, for the fair comparison, we use the baseline of Assylbekov et al. (2016) of their statistical model based on HMM not the combined model of HMM with CG. Table 9 shows the comparison results. It is can be seen that voted-perceptron model outperforms the HMM-based disambiguation and also beats the constrained grammar (CG), the rule-based disambiguaton tool.", |
| "cite_spans": [ |
| { |
| "start": 228, |
| "end": 252, |
| "text": "Assylbekov et al. (2016)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 466, |
| "end": 491, |
| "text": "(Assylbekov et al., 2016)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 688, |
| "end": 712, |
| "text": "Assylbekov et al. (2016)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 116, |
| "end": 123, |
| "text": "Table 9", |
| "ref_id": null |
| }, |
| { |
| "start": 792, |
| "end": 799, |
| "text": "Table 9", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "5.3." |
| }, |
| { |
| "text": "We categorize the errors of model output into three groups: root inconsistency, POS inconsistency and the morpheme chain inconsistency. Table 10 shows error percentages. It can be seen that in models M-1 to M-4 only trained with different length of morpheme chain, the root inconsistency error takes almost half of the total. The POS inconsistency error takes around 25%. After adding the POS to the models (M-5 to M-8), the root and POS inconsistency percentages decreased to around 44% and 20% respectively. It is apparent from this table that for the best HMM-based model, the root inconsistency error accounts for the large part of errors (44.85%). It is reasonable because these models did not include root as a label in training. Table 11 : Percentage of root, POS and morpheme chain errors for voted perceptron.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 136, |
| "end": 144, |
| "text": "Table 10", |
| "ref_id": "TABREF1" |
| }, |
| { |
| "start": 736, |
| "end": 744, |
| "text": "Table 11", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Error Analysis", |
| "sec_num": "5.4." |
| }, |
| { |
| "text": "Further analysing the output of the models, we found that the models tend to make error prediction for the possessive tags with 3-rd person and tags of tense. Because the possessive tags have the attribute of plural or singular, and these attributes can be determined only after the subject is captured. If there are many words between the subject and the current word with possessive tags or the subject is in hidden form, then the former involves the long-distance dependency problem, and the latter requires the model need certain semantic information of sentences that reflects the subject. Similarly, the corresponding tense tag in also involved to define the sentence tense before tagging the corresponding word with a tense tag. For example, in Kazakh, a verb surface word can have a future or current tense tag simultaneously, and the disambiguation can be done when the sentence tense is determined. As the HMM-based model is the first-order model, these errors cannot be solved. In voted-perceptron, we apply the [-2,-2] window to extract features, and can partially solve the long-distance problem.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Error Analysis", |
| "sec_num": "5.4." |
| }, |
| { |
| "text": "In this paper, we represent an approach of voted perceptron for morphological disambiguation for the case of Kazakh language. The approach can use arbitrary features in training and testing and can also apply to other languages easily. A new manually annotated corpus for Kazakh morphological disambiguation is presented in this paper for the further research. Experimental results show that voted perceptron outperforms the frequently used HMM-based and the rulebased constrained grammar. One possible future work is to perform transfer learning by using the learned feature vector of this approach for the typologically similar languages. Solving the long-distance dependency problem of morphological disambiguation is the another prior future work.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "6." |
| }, |
| { |
| "text": "https://www.inform.kz/kz", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Each morphological analysis consists of root, POS, and morpheme chain.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "https://svn.code.sf.net/p/apertium/svn/branches/kaz-tagger/ 4 http://wiki.apertium.org/wiki/Apertium-kaz5 We used our new developed morphological analyzer to decompose the words, and it has some issue of inconsistency of the name of morphological tags with their analyzer.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "This research has been conducted within the framework of the grant num. BR05236839 \"Development of information technologies and systems for stimulation of personalities sustainable development as one of the bases of development of digital Kazakhstan\".", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgement", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "A free/open-source hybrid morphological disambiguation tool for kazakh", |
| "authors": [ |
| { |
| "first": "Z", |
| "middle": [], |
| "last": "Assylbekov", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Washington", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Tyers", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Nurkas", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Sundetova", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Karibayeva", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Abduali", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Amirova", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "The First International Conference on Turkic Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Assylbekov, Z., Washington, J., Tyers, F., Nurkas, A., Sun- detova, A., Karibayeva, A., Abduali, B., and Amirova, D. (2016). A free/open-source hybrid morphological disambiguation tool for kazakh. The First International Conference on Turkic Computational Linguistics, Tur- CLing 2016 ; Conference date: 02-04-2016 Through 08- 04-2016.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Collins, M. (2002). Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms. EMNLP.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Morphnet: A sequence-to-sequence model that combines morphological analysis and disambiguation", |
| "authors": [ |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Dayanik", |
| "suffix": "" |
| }, |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Aky\u00fcrek", |
| "suffix": "" |
| }, |
| { |
| "first": "Yuret", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dayanik, E., Aky\u00fcrek, E., and Yuret, D. (2018). Mor- phnet: A sequence-to-sequence model that combines morphological analysis and disambiguation. CoRR, abs/1805.07946.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Video action transformer network", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Girdhar", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Carreira", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Doersch", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Zisserman", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Girdhar, R., Carreira, J., Doersch, C., and Zisserman, A. (2019). Video action transformer network. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "A rule based morphological analyzer and a morphological disambiguator for kazakh language", |
| "authors": [ |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Kessikbayeva", |
| "suffix": "" |
| }, |
| { |
| "first": "I", |
| "middle": [], |
| "last": "Cicekli", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Linguistics and Literature Studies", |
| "volume": "4", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kessikbayeva, G. and Cicekli, I. (2016). A rule based mor- phological analyzer and a morphological disambiguator for kazakh language. Linguistics and Literature Studies, 4:96-104, 01.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Nonlexical neural architecture for fine-grained POS tagging", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Labeau", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "L\u00f6ser", |
| "suffix": "" |
| }, |
| { |
| "first": "Allauzen", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "232--237", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Labeau, M., L\u00f6ser, K., and Allauzen, A. (2015). Non- lexical neural architecture for fine-grained POS tagging. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 232- 237, Lisbon, Portugal, September. Association for Com- putational Linguistics.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Finding function in form: Compositional character models for open vocabulary word representation", |
| "authors": [ |
| { |
| "first": "W", |
| "middle": [], |
| "last": "Ling", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Dyer", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [ |
| "W" |
| ], |
| "last": "Black", |
| "suffix": "" |
| }, |
| { |
| "first": "I", |
| "middle": [], |
| "last": "Trancoso", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Fermandez", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Amir", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Marujo", |
| "suffix": "" |
| }, |
| { |
| "first": "Luis", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "1520--1530", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ling, W., Dyer, C., Black, A. W., Trancoso, I., Fermandez, R., Amir, S., Marujo, L., and Luis, T. (2015). Finding function in form: Compositional character models for open vocabulary word representation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1520-1530, Lisbon, Portu- gal, September. Association for Computational Linguis- tics.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "End-to-end sequence labeling via bi-directional LSTM-CNNs-CRF", |
| "authors": [ |
| { |
| "first": "X", |
| "middle": [], |
| "last": "Ma", |
| "suffix": "" |
| }, |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Hovy", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "1064--1074", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ma, X. and Hovy, E. (2016). End-to-end sequence labeling via bi-directional LSTM-CNNs-CRF. In Proceedings of the 54th Annual Meeting of the Association for Computa- tional Linguistics (Volume 1: Long Papers), pages 1064- 1074, Berlin, Germany, August. Association for Compu- tational Linguistics.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Data-driven morphological analysis and disambiguation for kazakh", |
| "authors": [ |
| { |
| "first": "O", |
| "middle": [], |
| "last": "Makhambetov", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Makazhanov", |
| "suffix": "" |
| }, |
| { |
| "first": "I", |
| "middle": [], |
| "last": "Sabyrgaliyev", |
| "suffix": "" |
| }, |
| { |
| "first": "Yessenbayev", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| }, |
| { |
| "first": "Z", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Computational Linguistics and Intelligent Text Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "151--163", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Makhambetov, O., Makazhanov, A., Sabyrgaliyev, I., and Yessenbayev, Z. (2015). Data-driven morphological analysis and disambiguation for kazakh. In Alexander Gelbukh, editor, Computational Linguistics and Intelli- gent Text Processing, pages 151-163, Cham. Springer International Publishing.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Neural factor graph models for cross-lingual morphological tagging", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Malaviya", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [ |
| "R" |
| ], |
| "last": "Gormley", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Neubig", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Malaviya, C., Gormley, M. R., and Neubig, G. (2018). Neural factor graph models for cross-lingual morpholog- ical tagging. In ACL.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Continuous speech recognition of kazakh language", |
| "authors": [ |
| { |
| "first": "", |
| "middle": [], |
| "last": "Mamyrbayev", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Turdalyuly", |
| "suffix": "" |
| }, |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Mekebayev", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Mukhsina", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Alimhan", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Babaali", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Nabieva", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Duisenbayeva", |
| "suffix": "" |
| }, |
| { |
| "first": "Akhmetov", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "ITM Web of Conferences", |
| "volume": "24", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mamyrbayev, , Turdalyuly, M., Mekebayev, N., Mukhsina, K., Alimhan, K., BabaAli, B., Nabieva, G., Duisen- bayeva, A., and Akhmetov, B. (2019). Continuous speech recognition of kazakh language. ITM Web of Conferences, 24:01012, 01.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Distributed representations of words and phrases and their compositionality", |
| "authors": [ |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Mikolov", |
| "suffix": "" |
| }, |
| { |
| "first": "I", |
| "middle": [], |
| "last": "Sutskever", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Corrado", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Dean", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 26th International Conference on Neural Information Processing Systems", |
| "volume": "2", |
| "issue": "", |
| "pages": "3111--3119", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mikolov, T., Sutskever, I., Chen, K., Corrado, G., and Dean, J. (2013). Distributed representations of words and phrases and their compositionality. In Proceedings of the 26th International Conference on Neural Infor- mation Processing Systems -Volume 2, NIPS'13, pages 3111-3119, USA. Curran Associates Inc.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Efficient higher-order CRFs for morphological tagging", |
| "authors": [ |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Mueller", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Schmid", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Sch\u00fctze", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "322--332", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mueller, T., Schmid, H., and Sch\u00fctze, H. (2013). Efficient higher-order CRFs for morphological tagging. In Pro- ceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 322-332, Seat- tle, Washington, USA, October. Association for Compu- tational Linguistics.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Robust morphological tagging with word representations", |
| "authors": [ |
| { |
| "first": "T", |
| "middle": [], |
| "last": "M\u00fcller", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Sch\u00fctze", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "HLT-NAACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M\u00fcller, T. and Sch\u00fctze, H. (2015). Robust morphological tagging with word representations. In HLT-NAACL.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Deep rnn framework for visual sequential applications", |
| "authors": [ |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Pang", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Zha", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Cao", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Shi", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Lu", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pang, B., Zha, K., Cao, H., Shi, C., and Lu, C. (2019). Deep rnn framework for visual sequential applications. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Sequence to sequence learning with neural networks", |
| "authors": [ |
| { |
| "first": "I", |
| "middle": [], |
| "last": "Sutskever", |
| "suffix": "" |
| }, |
| { |
| "first": "O", |
| "middle": [], |
| "last": "Vinyals", |
| "suffix": "" |
| }, |
| { |
| "first": "Q", |
| "middle": [ |
| "V" |
| ], |
| "last": "Le", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Advances in Neural Information Processing Systems", |
| "volume": "27", |
| "issue": "", |
| "pages": "3104--3112", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sutskever, I., Vinyals, O., and Le, Q. V. (2014). Sequence to sequence learning with neural networks. In Z. Ghahra- mani, et al., editors, Advances in Neural Information Processing Systems 27, pages 3104-3112. Curran Asso- ciates, Inc.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Modeling composite labels for neural morphological tagging", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Tkachenko", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Sirts", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 22nd Conference on Computational Natural Language Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "368--379", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tkachenko, A. and Sirts, K. (2018). Modeling composite labels for neural morphological tagging. In Proceedings of the 22nd Conference on Computational Natural Lan- guage Learning, pages 368-379, Brussels, Belgium, Oc- tober. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Neural named entity recognition for kazakh", |
| "authors": [ |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Tolegen", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Toleu", |
| "suffix": "" |
| }, |
| { |
| "first": "O", |
| "middle": [], |
| "last": "Mamyrbayev", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Mussabayev", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 20th International Conference on Computational Linguistics and Intelligent Text Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tolegen, G., Toleu, A., Mamyrbayev, O., and Mussabayev, R. (2019). Neural named entity recognition for kazakh. In Proceedings of the 20th International Conference on Computational Linguistics and Intelligent Text Process- ing, CICLing. Springer Lecture Notes in Computer Sci- ence.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Character-aware neural morphological disambiguation", |
| "authors": [], |
| "year": null, |
| "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "2", |
| "issue": "", |
| "pages": "666--671", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Character-aware neural morphological disambiguation. In Proceedings of the 55th Annual Meeting of the Asso- ciation for Computational Linguistics (Volume 2: Short Papers), pages 666-671, Vancouver, Canada, July. Asso- ciation for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Character-based deep learning models for token and sentence segmentation", |
| "authors": [], |
| "year": 2017, |
| "venue": "Proceedings of the 5th International Conference on Turkic Languages Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Character-based deep learning models for token and sen- tence segmentation. In Proceedings of the 5th Inter- national Conference on Turkic Languages Processing (TurkLang 2017), Kazan, Tatarstan, Russian Federation, October.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Keyvector: Unsupervised keyphrase extraction using weighted topic via semantic relatedness", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Toleu", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Tolegen", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Mussabayev", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "23", |
| "issue": "", |
| "pages": "861--869", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Toleu, A., Tolegen, G., and Mussabayev, R. (2019). Keyvector: Unsupervised keyphrase extraction using weighted topic via semantic relatedness. volume 23, page 861-869. 10.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF1": { |
| "type_str": "figure", |
| "text": "Annotation tool for manually annotating the morphological disambiguation corpus.", |
| "uris": null, |
| "num": null |
| }, |
| "TABREF0": { |
| "text": "Corpus sizeKaz uni. tok. Eng uni. tok.", |
| "html": null, |
| "type_str": "table", |
| "num": null, |
| "content": "<table><tr><td>948,612 (News)</td><td>91,495</td><td>57,017</td></tr><tr><td>25,327,611 (Wikipedia)</td><td>873,693</td><td>427,980</td></tr></table>" |
| }, |
| "TABREF1": { |
| "text": "Comparison of Kazakh and English corpora. uni. tok. denotes the number of unique tokens.", |
| "html": null, |
| "type_str": "table", |
| "num": null, |
| "content": "<table/>" |
| }, |
| "TABREF2": { |
| "text": "", |
| "html": null, |
| "type_str": "table", |
| "num": null, |
| "content": "<table><tr><td>: Example of morphological analysis for a Kazakh sentence: Kala kelbeti Zhana (the appearance of the city is</td></tr><tr><td>new).</td></tr><tr><td>on the Hidden Markov Model (HMM). The authors con-</td></tr><tr><td>ducted 10 cross-validated evaluations and obtained 86% ac-</td></tr><tr><td>curacy on the test data. Kessikbayeva and Cicekli (2016)</td></tr><tr><td>presented a rule-based morphological disambiguator for</td></tr><tr><td>Kazakh language and it achieved 87% accuracy on the test</td></tr><tr><td>data (about 15,000 words). In the same direction, Assyl-</td></tr><tr><td>bekov et al. (2016) presented a hybrid approach that ap-</td></tr><tr><td>plied constrained grammar (CG) with HMM tagger. The</td></tr><tr><td>authors reported that the HMM tagger achieved 84.55% ac-</td></tr><tr><td>curacy and the hybrid approach achieved 90.73% accuracy</td></tr><tr><td>on the test data.</td></tr></table>" |
| }, |
| "TABREF3": { |
| "text": "Feature category.", |
| "html": null, |
| "type_str": "table", |
| "num": null, |
| "content": "<table/>" |
| }, |
| "TABREF4": { |
| "text": "", |
| "html": null, |
| "type_str": "table", |
| "num": null, |
| "content": "<table><tr><td>shows the statistics about</td></tr></table>" |
| }, |
| "TABREF5": { |
| "text": "", |
| "html": null, |
| "type_str": "table", |
| "num": null, |
| "content": "<table/>" |
| }, |
| "TABREF7": { |
| "text": "Various HMM-based models with their tag numbers for disambiguation after breaking down the analysis into a small number of morphological units.", |
| "html": null, |
| "type_str": "table", |
| "num": null, |
| "content": "<table/>" |
| }, |
| "TABREF8": { |
| "text": "", |
| "html": null, |
| "type_str": "table", |
| "num": null, |
| "content": "<table><tr><td>provides the summary statistics for HMM model's</td></tr><tr><td>variations (denoted from M-1 to M-8), which are trained</td></tr><tr><td>with different morphological units. For instance, model</td></tr><tr><td>M-1 uses the last single tag of morpheme chain as label</td></tr><tr><td>to train, and the total number of labels is 57. Models de-</td></tr><tr><td>noted with M-5 to M-8 include the POS combined with</td></tr><tr><td>thee morpheme tags. We found the max-length and the</td></tr><tr><td>average length of the morpheme chain in the corpus are 7</td></tr><tr><td>and 3.07, respectively. The idea behind such a model setup</td></tr></table>" |
| }, |
| "TABREF9": { |
| "text": "Models Overall acc. Known acc. Unk. acc.", |
| "html": null, |
| "type_str": "table", |
| "num": null, |
| "content": "<table><tr><td>M-1</td><td>72.04</td><td>77.25</td><td>57.24</td></tr><tr><td>M-2</td><td>78.72</td><td>82.19</td><td>68.88</td></tr><tr><td>M-3</td><td>78.47</td><td>82.44</td><td>67.22</td></tr><tr><td>M-4</td><td>78.60</td><td>82.52</td><td>67.45</td></tr><tr><td>M-5</td><td>79.96</td><td>84.86</td><td>66.03</td></tr><tr><td>M-6</td><td>84.91</td><td>89.54</td><td>71.73</td></tr><tr><td>M-7</td><td>84.91</td><td>89.54</td><td>71.73</td></tr><tr><td>M-8</td><td>84.78</td><td>89.54</td><td>71.25</td></tr></table>" |
| }, |
| "TABREF10": { |
| "text": "Accuracy results of HMM-based models. Unk. acc. denotes for unknown tokens accuracy.", |
| "html": null, |
| "type_str": "table", |
| "num": null, |
| "content": "<table><tr><td colspan=\"5\"># Models Overall acc. Known acc. Unk. acc.</td></tr><tr><td>0</td><td>+w</td><td>53.68</td><td>55.51</td><td>48.45</td></tr><tr><td>1</td><td>+r</td><td>62.09</td><td>65.21</td><td>53.21</td></tr><tr><td>2</td><td>+pos</td><td>70.56</td><td>75</td><td>57.95</td></tr><tr><td>3</td><td>+mc</td><td>89.11</td><td>92.39</td><td>78.81</td></tr><tr><td>4</td><td>+ma</td><td>89.54</td><td>92.47</td><td>81.23</td></tr><tr><td>5</td><td>+#t</td><td>89.17</td><td>92.47</td><td>79.81</td></tr><tr><td>6</td><td>+wc</td><td>90.23</td><td>93.39</td><td>81.23</td></tr><tr><td>7</td><td>+ps</td><td>90.53</td><td>93.39</td><td>82.42</td></tr></table>" |
| }, |
| "TABREF11": { |
| "text": "Accuracy results of the voted-perceptron approach. +mc indicates that the current feature with its feature combinations is added to the model with previous features +w, +r, +pos accumulatively.", |
| "html": null, |
| "type_str": "table", |
| "num": null, |
| "content": "<table><tr><td>Models</td><td colspan=\"3\">Overall acc. Known acc. Unk. acc.</td></tr><tr><td>HMM</td><td>84.91</td><td>89.54</td><td>71.73</td></tr><tr><td>Voted-Perceptron</td><td>90.53</td><td>93.39</td><td>82.42</td></tr><tr><td>Improv.</td><td>5.62</td><td>3.85</td><td>10.69</td></tr></table>" |
| }, |
| "TABREF12": { |
| "text": "Comparison of the best results from HMM-based models and the voted-perceptron.", |
| "html": null, |
| "type_str": "table", |
| "num": null, |
| "content": "<table/>" |
| }, |
| "TABREF15": { |
| "text": "Percentage of root, POS and morpheme chain errors for HMM-based models.", |
| "html": null, |
| "type_str": "table", |
| "num": null, |
| "content": "<table/>" |
| }, |
| "TABREF16": { |
| "text": "shows the error' percentages for voted perceptron.It can be seen that the different features affect the model's output error percentage for voted perceptron. The error percentage of the final model for root, POS and morpheme chain inconsistency are 25.49%, 17.64% and 56.86%. It is apparent that a very large portion of error is accounted for morpheme chain inconsistency.", |
| "html": null, |
| "type_str": "table", |
| "num": null, |
| "content": "<table><tr><td>Models</td><td>root</td><td>POS</td><td>mc</td></tr><tr><td>+w</td><td colspan=\"3\">37.65 21.76 40.58</td></tr><tr><td>+r</td><td colspan=\"3\">11.58 28.87 59.54</td></tr><tr><td>+pos</td><td colspan=\"3\">11.34 8.19 80.46</td></tr><tr><td>+mc</td><td colspan=\"3\">30.11 18.75 51.13</td></tr><tr><td>+ma</td><td colspan=\"3\">30.17 22.48 47.33</td></tr><tr><td>+#t</td><td>25.14</td><td>24</td><td>50.85</td></tr><tr><td>+wc</td><td colspan=\"3\">27.21 18.98 53.79</td></tr><tr><td>+ps</td><td colspan=\"3\">25.49 17.64 56.86</td></tr></table>" |
| } |
| } |
| } |
| } |