{
"paper_id": "R19-1007",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:03:15.861079Z"
},
"title": "Supervised Morphological Segmentation Using Rich Annotated Lexicon",
"authors": [
{
"first": "Ebrahim",
"middle": [],
"last": "Ansari",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Charles University",
"location": {}
},
"email": "ansari@iasbs.ac.ir"
},
{
"first": "Zden\u011bk",
"middle": [],
"last": "\u017dabokrtsk\u00fd",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Charles University",
"location": {}
},
"email": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Mahmoudi",
"suffix": "",
"affiliation": {},
"email": "m.mahmodi@iasbs.ac.ir"
},
{
"first": "Hamid",
"middle": [],
"last": "Haghdoost",
"suffix": "",
"affiliation": {},
"email": "hamid.h@iasbs.ac.ir"
},
{
"first": "Jon\u00e1\u0161",
"middle": [],
"last": "Vidra",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Charles University",
"location": {}
},
"email": "vidra@ufal.mff.cuni.cz"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Morphological segmentation of words is the process of dividing a word into smaller units called morphemes; it is especially difficult when the language in question is morphologically rich or polysynthetic. In this work, we designed and evaluated several Recurrent Neural Network (RNN) based models as well as various other machine learning approaches for the morphological segmentation task. We trained our models using annotated segmentation lexicons. To evaluate the effect of the training data size on our models, we created a large hand-annotated morphologically segmented lexicon of Persian words, which is, to the best of our knowledge, the first and only segmentation lexicon for the Persian language. In the experimental phase, using the hand-annotated Persian lexicon and two smaller similar lexicons for the Czech and Finnish languages, we evaluated the effect of the training data size, of different hyperparameter settings, and of different RNN-based models.",
"pdf_parse": {
"paper_id": "R19-1007",
"_pdf_hash": "",
"abstract": [
{
"text": "Morphological segmentation of words is the process of dividing a word into smaller units called morphemes; it is especially difficult when the language in question is morphologically rich or polysynthetic. In this work, we designed and evaluated several Recurrent Neural Network (RNN) based models as well as various other machine learning approaches for the morphological segmentation task. We trained our models using annotated segmentation lexicons. To evaluate the effect of the training data size on our models, we created a large hand-annotated morphologically segmented lexicon of Persian words, which is, to the best of our knowledge, the first and only segmentation lexicon for the Persian language. In the experimental phase, using the hand-annotated Persian lexicon and two smaller similar lexicons for the Czech and Finnish languages, we evaluated the effect of the training data size, of different hyperparameter settings, and of different RNN-based models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Morphological analysis must be tackled in some way in all natural language processing tasks, such as machine translation, speech recognition, and information retrieval. Morphological segmentation of words is the process of dividing a word into smaller units called morphemes. The morphological segmentation task is harder for languages which are morphologically rich and complex, like Persian, Arabic, Czech, Finnish or Turkish, especially when there are not enough annotated data for those languages. In this paper, we designed and evaluated various supervised setups to perform morphological segmentation using a hand-annotated segmented lexicon for training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The efficiency of supervised approaches (especially of deep neural network models) is naturally highly dependent on the size of the training data. In order to evaluate the effect of the training data size on our segmentation models, we created a rich Persian hand-annotated segmentation lexicon, which is, as far as we know, the first and only such computer-readable dataset for Persian. Persian (Farsi) is an Indo-European language of the Indo-Iranian branch and is spoken in Iran, Afghanistan, Tajikistan and some other regions historically connected to ancient Persia. In addition, we evaluated our models on Czech and Finnish; however, the amount of annotated data for these languages is substantially lower.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Automatic morphological segmentation was first introduced by Harris (1970). More recent research on morphological segmentation has usually focused on unsupervised learning (Goldsmith, 2001; Creutz and Lagus, 2002; Poon et al., 2009; Narasimhan et al., 2015; Cao and Rei, 2016), whose goal is to find the segmentation boundaries using an unlabeled set of word forms (or possibly a corpus too). Probably the most popular unsupervised systems are LINGUISTICA (Goldsmith, 2001) and MORFESSOR, with a number of variants (Creutz and Lagus, 2002; Creutz et al., 2007; Gr\u00f6nroos et al., 2014). A version of the latter which includes a semi-supervised extension was introduced by Kohonen et al. (2010). Poon et al. (2009) presented a log-linear model which uses overlapping features for unsupervised morphological segmentation.",
"cite_spans": [
{
"start": 179,
"end": 196,
"text": "(Goldsmith, 2001;",
"ref_id": "BIBREF8"
},
{
"start": 197,
"end": 220,
"text": "Creutz and Lagus, 2002;",
"ref_id": "BIBREF7"
},
{
"start": 221,
"end": 239,
"text": "Poon et al., 2009;",
"ref_id": "BIBREF21"
},
{
"start": 240,
"end": 264,
"text": "Narasimhan et al., 2015;",
"ref_id": "BIBREF19"
},
{
"start": 265,
"end": 283,
"text": "Cao and Rei, 2016)",
"ref_id": "BIBREF4"
},
{
"start": 465,
"end": 482,
"text": "(Goldsmith, 2001)",
"ref_id": "BIBREF8"
},
{
"start": 524,
"end": 548,
"text": "(Creutz and Lagus, 2002;",
"ref_id": "BIBREF7"
},
{
"start": 549,
"end": 569,
"text": "Creutz et al., 2007;",
"ref_id": "BIBREF6"
},
{
"start": 570,
"end": 592,
"text": "Gr\u00f6nroos et al., 2014)",
"ref_id": "BIBREF11"
},
{
"start": 685,
"end": 707,
"text": "(Kohonen et al., 2010)",
"ref_id": "BIBREF18"
},
{
"start": 710,
"end": 728,
"text": "Poon et al. (2009)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Despite the dominance of unsupervised systems, as soon as even a small amount of segmented training data is available, entirely unsupervised systems tend not to be competitive. Furthermore, unsupervised segmentation still has considerable weaknesses, including over-segmentation of roots and erroneous segmentation of affixes (Wang et al., 2016). To deal with these limitations, recent works show a growing interest in semi-supervised and supervised approaches (Kohonen et al., 2010; Ruokolainen et al., 2013, 2014; Sirts and Goldwater, 2013; Wang et al., 2016; Kann and Sch\u00fctze, 2016; Kann et al., 2018; Cotterell and Sch\u00fctze, 2017; Gr\u00f6nroos et al., 2019), which employ annotated morpheme boundaries in the training phase.",
"cite_spans": [
{
"start": 348,
"end": 367,
"text": "(Wang et al., 2016)",
"ref_id": "BIBREF30"
},
{
"start": 484,
"end": 506,
"text": "(Kohonen et al., 2010;",
"ref_id": "BIBREF18"
},
{
"start": 507,
"end": 531,
"text": "Ruokolainen et al., 2013",
"ref_id": "BIBREF24"
},
{
"start": 532,
"end": 558,
"text": "Ruokolainen et al., , 2014",
"ref_id": "BIBREF25"
},
{
"start": 559,
"end": 585,
"text": "Sirts and Goldwater, 2013;",
"ref_id": "BIBREF26"
},
{
"start": 586,
"end": 604,
"text": "Wang et al., 2016;",
"ref_id": "BIBREF30"
},
{
"start": 605,
"end": 628,
"text": "Kann and Sch\u00fctze, 2016;",
"ref_id": "BIBREF16"
},
{
"start": 629,
"end": 647,
"text": "Kann et al., 2018;",
"ref_id": "BIBREF15"
},
{
"start": 648,
"end": 676,
"text": "Cotterell and Sch\u00fctze, 2017;",
"ref_id": "BIBREF5"
},
{
"start": 677,
"end": 699,
"text": "Gr\u00f6nroos et al., 2019)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In our work, we designed and evaluated various machine learning models and trained them in a supervised manner using only the annotated lexicon. Our models leverage neither unannotated data nor context information, and use only the primary hand-annotated segmentation lexicons.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Experimental results show that our Bi-LSTM model performs slightly better than the other models in boundary prediction for our hand-segmented Persian lexicon, while KNN (the K-Nearest Neighbors algorithm) performs better when whole-word accuracy is considered.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The paper is organized as follows: Section 2 addresses related work on morphological segmentation. Section 3 describes the methodology and the machine learning models used in this work. Section 4 introduces our hand-segmented Persian lexicon as well as the related preprocessing phases. Section 5 presents the experimental results compared to some baseline systems, and finally Section 6 concludes the paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Supervised morphological segmentation, i.e. using a lexicon (or a corpus) with annotated morpheme boundaries in the training phase, has attracted increasing attention in recent years. One of the most successful recent research directions on supervised morphological segmentation is the work of Ruokolainen et al. (2013), whose authors employ Conditional Random Fields (CRF), a popular discriminative log-linear model, to predict morpheme boundaries given their local sub-string contexts instead of learning a morpheme lexicon. Ruokolainen et al. (2014) extended this work into a semi-supervised version by incorporating several available unsupervised segmentation techniques into their CRF-based model via feature set augmentation. Long Short-Term Memory (LSTM) networks (Hochreiter and Schmidhuber, 1997) have recently achieved great success in sequence learning tasks, including outstanding results on sequential tasks such as machine translation (Sutskever et al., 2014). Wang et al. (2016) proposed three types of window-based LSTM neural network models, named Window LSTM, Multi-Window LSTM and Bidirectional Multi-Window LSTM, in order to automatically learn sequence structures and predict morphological segmentations of words in raw text. They used only word boundary information, without any need for extra feature engineering in the training phase. The authors compared their models with selected supervised models as well as with an LSTM architecture (Wang et al., 2016); similarly to the work of Ruokolainen et al. (2013), their architecture is based on the whole text and context information instead of using only the lexicon. Cotterell and Sch\u00fctze (2017) increased the segmentation accuracy by employing semantic coherence information in their models. They used an RNN (Recurrent Neural Network) to design a composition model. They also found that using an RNN with dependency vectors gives the best results on vector approximation (Cotterell and Sch\u00fctze, 2017).",
"cite_spans": [
{
"start": 527,
"end": 553,
"text": "(Ruokolainen et al., 2014)",
"ref_id": "BIBREF25"
},
{
"start": 736,
"end": 762,
"text": "(Ruokolainen et al., 2014)",
"ref_id": "BIBREF25"
},
{
"start": 802,
"end": 835,
"text": "(Hochreiter and Schmidhuber, 1997",
"ref_id": "BIBREF14"
},
{
"start": 981,
"end": 1005,
"text": "(Sutskever et al., 2014)",
"ref_id": "BIBREF27"
},
{
"start": 1008,
"end": 1026,
"text": "Wang et al. (2016)",
"ref_id": "BIBREF30"
},
{
"start": 1497,
"end": 1516,
"text": "(Wang et al., 2016)",
"ref_id": "BIBREF30"
},
{
"start": 1678,
"end": 1706,
"text": "Cotterell and Sch\u00fctze (2017)",
"ref_id": "BIBREF5"
},
{
"start": 1814,
"end": 1844,
"text": "RNN (Recurrent Neural Network)",
"ref_id": null
},
{
"start": 1975,
"end": 2004,
"text": "(Cotterell and Sch\u00fctze, 2017)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Recently, encoder-decoder (attention-based) models (Bahdanau et al., 2014) have achieved great success in machine translation. Kann and Sch\u00fctze (2016) used an encoder-decoder model which encodes the input as a sequence of morphological tags of the source and target and feeds the model with the sequence of letters of the source form. They select the final answer using majority voting among five different ensembled RNN encoder-decoder models. Kann and Sch\u00fctze (2016) proposed a seq2seq (sequence-to-sequence) architecture for the word segmentation task. They used a bi-directional RNN to encode the input word (i.e. a sequence of characters), concatenated the forward and backward hidden states yielded by two GRUs, and passed the resulting vector to the decoder. The decoder is a single GRU which uses segmentation symbols for training. They introduced two multi-task training approaches as well as data augmentations to improve the quality of the presented model, and showed that neural seq2seq models perform on par with or better than other strong baselines for polysynthetic languages in a minimal-resource setting. Their neural seq2seq models constitute the state of the art for morphological segmentation in high-resource settings and for (mostly) European languages (Kann et al., 2018).",
"cite_spans": [
{
"start": 145,
"end": 168,
"text": "Kann and Sch\u00fctze (2016)",
"ref_id": "BIBREF16"
},
{
"start": 461,
"end": 484,
"text": "Kann and Sch\u00fctze (2016)",
"ref_id": "BIBREF16"
},
{
"start": 1304,
"end": 1323,
"text": "(Kann et al., 2018)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The main language studied in our work is Persian, which belongs to the morphologically rich languages and is powerful and versatile in word building. Having many affixes for forming new words (over a hundred), and the ability to build affixes, and especially prefixes, from nouns, Persian is considered an agglutinative language, since it also frequently uses derivational agglutination to form new words from nouns, adjectives, and verbal stems. Hesabi (1988) claimed that more than 226 million words can be derived in Persian.",
"cite_spans": [
{
"start": 458,
"end": 471,
"text": "Hesabi (1988)",
"ref_id": "BIBREF13"
},
{
"start": 532,
"end": 546,
"text": "(Hesabi, 1988)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "To the best of our knowledge, research on the morphology of the Persian language is very limited. Rasooli et al. (2013) claimed that performing morphological segmentation in the preprocessing phase of statistical machine translation can improve the quality of translations for morphologically rich and complex languages. Although they segmented only a very small portion of Persian words (only some Persian verbs), the quality of their machine translation system increased by 1.9 BLEU points. Arabsorkhi and Shamsfard (2006) proposed a Minimum Description Length (MDL) based algorithm with some improvements for discovering the morphemes of the Persian language through automatic analysis of corpora.",
"cite_spans": [
{
"start": 487,
"end": 518,
"text": "Arabsorkhi and Shamsfard (2006)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In this work, we evaluated selected machine learning models, including feature-based machine learning approaches in which the task of word segmentation is reformulated as a classification task, as well as various deep-learning (DL for short) neural network models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our Machine Learning Models",
"sec_num": "3"
},
{
"text": "Because of the huge number of learned parameters in DL models, having enough training data is critical. The fact that we created a large hand-annotated dataset for Persian allows evaluating the effect of the training data size on a relatively wide scale, as described in Section 4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our Machine Learning Models",
"sec_num": "3"
},
{
"text": "We convert all segmentations into a simple string format in which the letters \"B\" and \"L\" encode a boundary letter and a continuation letter, respectively. For example, for the word \"goes\", the encoded segmentation is \"LLBL\", which shows that there is a segmentation boundary in front of the third letter (\"e\"). Since our models consider only the morphologically segmented lexicon and do not employ any other information, such as corpus contexts or lists of unannotated words, this encoding is sufficient and makes the specification of boundary locations easy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our Machine Learning Models",
"sec_num": "3"
},
{
"text": "If a semi-space letter is present (a feature specific to the Persian written language), the semi-space letter is always considered a boundary letter. An experiment focused on this feature is described in Subsection 5.2.3, which shows that our models perform better when this information exists in the annotated lexicon.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our Machine Learning Models",
"sec_num": "3"
},
{
"text": "In the first setup, we convert the segmentation task (the task of segmenting a word into a sequence of morphemes) into a set of independent binary decisions capturing the presence or absence of a segmentation boundary in front of each letter of the word. For this task, we use various standard off-the-shelf classifiers available in the Scikit-learn toolkit (Pedregosa et al., 2011). So far, we provide the classifiers only with features that are extractable from the word alone. More specifically, we use only character-based features. These character-based features include letters and letter sequences (and their combinations) before and after the character in question, which is subsequently assigned one of two classes: \"B\" for boundary characters and \"L\" for continuation characters. The task of these methods is then to train a classification model which classifies all characters of a word into those two classes, given binary features based on the surrounding characters. For example, for the fifth character of the word \"hopeless\", some of our features could be: \"e\", \"le\", and \"ope\". The classification predictions are performed independently.",
"cite_spans": [
{
"start": 360,
"end": 384,
"text": "(Pedregosa et al., 2011)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Classification-Based Segmentation Models",
"sec_num": "3.1"
},
{
"text": "Besides the classification-based segmentation models, we designed and evaluated five DL models based on GRU, LSTM, Bi-LSTM, seq2seq and Bi-LSTM with the attention mechanism, respectively. The first three models are illustrated in Figures 1 and 2. The presented seq2seq model is similar to the model described in (Gr\u00f6nroos et al., 2019). The last model is attention-based and is shown in Figure 3. In this model, we use a Bi-LSTM as the encoder and an LSTM as the attention layer; finally, the outputs of the encoder and attention layers are added together.",
"cite_spans": [
{
"start": 314,
"end": 337,
"text": "(Gr\u00f6nroos et al., 2019)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 230,
"end": 245,
"text": "Figures 1 and 2",
"ref_id": "FIGREF0"
},
{
"start": 412,
"end": 420,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Deep Neural Network Based Models",
"sec_num": "3.2"
},
{
"text": "In this section, we describe the rich Persian hand-annotated dataset, the existing Finnish dataset from the Morpho Challenge 2010 shared task (Virpioja et al., 2011), and the Czech dataset used in our experiments.",
"cite_spans": [
{
"start": 134,
"end": 157,
"text": "(Virpioja et al., 2011)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Morphological Segmentation Lexicons",
"sec_num": "4"
},
{
"text": "We extracted our primary word list from three different corpora. The first corpus contains sentences extracted from the Persian Wikipedia (Karimi et al., 2018). The second one is the popular Persian monolingual BijanKhan corpus (Bijankhan et al., 2011), and the last one is Persian-NER (Poostchi et al., 2018). For all of these corpora, using the Hazm toolset (Persian preprocessing and tokenization tools) and the stemming tool presented by Taghi-Zadeh et al. (2015), we extracted and normalized all sentences; in the final steps, all words were lemmatized and stemmed using our rule-based stemmer and a Persian lemma collection. Finally, all semi-spaces were automatically detected and fixed. Words with more than 10 occurrences in the corpora were selected for manual annotation. We sent all 80K words to our 16 annotators such that each word was checked and annotated by two independent persons. Annotators decided on the lemma of the word in question, its segmentation, plurality, and ambiguity (whether the word has more than one meaning), or they could delete the word if they thought it was not a proper Persian word. Moreover, some segmentations predicted with a high confidence score by our automatic segmenter were offered to the annotators. We removed almost 30K words which both annotators marked for deletion, and the remaining 50K words were sent for inter-annotator comparison. In this step, all disagreements were checked and corrected by the authors of this paper, and finally all words were quickly reviewed by two Persian linguists. The whole process took around six weeks. In order to use the hand-annotated lexicon in our work, we extracted the segmentation part from the dataset and converted it to the binary format described in Section 3.",
"cite_spans": [
{
"start": 138,
"end": 159,
"text": "(Karimi et al., 2018)",
"ref_id": "BIBREF17"
},
{
"start": 226,
"end": 250,
"text": "(Bijankhan et al., 2011)",
"ref_id": "BIBREF3"
},
{
"start": 287,
"end": 310,
"text": "(Poostchi et al., 2018)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Persian Hand-Annotated Morphological Segmentation Dataset",
"sec_num": "4.1"
},
{
"text": "The total number of words in our Persian dataset is 40K. The dataset is publicly available in the LINDAT/CLARIN repository (Ansari et al., 2019).",
"cite_spans": [
{
"start": 131,
"end": 152,
"text": "(Ansari et al., 2019)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Persian Hand-Annotated Morphological Segmentation Dataset",
"sec_num": "4.1"
},
{
"text": "We downloaded the Finnish segmentation dataset from the Morpho Challenge 2010 shared task (Virpioja et al., 2011) and converted it into our binary format. The Finnish dataset contains 2000 segmented words. While these datasets are small compared to our hand-annotated Persian dataset, we used them to assess the efficiency of the presented models when the size of the training dataset is limited. The Czech dataset results from a prototype segmentation annotation of Czech words. A sample of 1000 lemmas was selected randomly from DeriNet, which is a lexical database focused on derivation in Czech (\u017dabokrtsk\u00fd et al., 2016). The lemmas were manually segmented by two independent annotators, and all annotation differences were resolved subsequently during a third pass through the data. The annotation resulted in an average of 4.6 morphemes per word, partially as a result of the fact that the lemmas were sampled uniformly, regardless of their corpus frequency, and thus the selection is biased towards longer words.",
"cite_spans": [
{
"start": 92,
"end": 115,
"text": "(Virpioja et al., 2011)",
"ref_id": "BIBREF29"
},
{
"start": 590,
"end": 621,
"text": "Czech (\u017dabokrtsk\u00fd et al., 2016)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Existing Finnish and Czech Segmentation Datasets",
"sec_num": "4.2"
},
{
"text": "To partition our datasets (Persian, Czech and Finnish) into training, development and test sets, we used a commonly applied method (Ruokolainen et al., 2013): the words are sorted according to their frequency, every eighth word starting from the first is assigned to the test set and every eighth word starting from the second to the development set, while the remaining words are moved to the training set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "5"
},
{
"text": "In order to evaluate the effect of the training data size, we select 1/64, 1/32, 1/16, 1/8, 1/4, 3/8, 1/2, 3/4 and all of the training set to carry out experiments with different training sizes. In all experiments, we report three evaluation measures: the accuracy of predicted morpheme boundaries (in terms of precision, recall, and F-measure), the percentage of correct binary predictions over all characters, and the percentage of correctly segmented words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "5"
},
{
"text": "As described in Section 2, some previous works reported accuracy in terms of the number of correct predictions (boundary and word) in a running text, instead of considering unique words sampled from a lexicon. Hence, we decided to also report such accuracy in our experiments, in addition to our lexicon evaluation. For this experiment, we selected a part of a monolingual text; after removing all words occurring in the text from our training lexicon, the remaining segmented words were used as the training set, and the word segmentation accuracy on the test sentences is reported separately.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "5"
},
{
"text": "We selected two baseline systems to compare our models against. The first baseline is the unsupervised version of MORFESSOR, introduced and implemented by Creutz et al. (2007). The second baseline is FlatCat (Gr\u00f6nroos et al., 2014), a well-known semi-supervised version of MORFESSOR that uses a Hidden Markov Model for segmentation. In addition to the annotated data, semi-supervised MORFESSOR (i.e. FlatCat) uses a set of 100,000 word types, sampled following their frequency in the corpus, as its unannotated training dataset. For both baselines, the best-performing model is selected and compared with our neural network based models.",
"cite_spans": [
{
"start": 174,
"end": 194,
"text": "Creutz et al. (2007)",
"ref_id": "BIBREF6"
},
{
"start": 228,
"end": 251,
"text": "(Gr\u00f6nroos et al., 2014)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "5.1"
},
{
"text": "As described in Section 4, we designed various models for the morphological segmentation task. In the following subsections, the different experiments performed in this work are reviewed. In all tables, the column titled W% indicates the proportion of perfectly segmented words, and the column titled Ch% indicates the accuracy of characters classified as boundary or non-boundary. Finally, P%, R%, and F% indicate precision, recall and F-measure, respectively, for morpheme boundary detection, naturally excluding the trivial word-final positions from the evaluation. Table 1 shows the evaluation results of morphological segmentation using our Persian hand-annotated dataset when the whole training data is used. For each model, only the results of the best-performing hyperparameter configuration are reported. As shown in Table 1, our Bi-LSTM model performs slightly better than the rest in boundary prediction; however, the classification models are surprisingly almost on par with our complex DL models. Considering word accuracy, the classification models perform better than the DL models. A possible explanation is that the classification models make use of n-gram features and handle the characteristics of the whole word more efficiently than sequence-based models. Moreover, in our experiments, the presented seq2seq model does not perform well. An explanation could be that, since no context information is available, the attention mechanism has no distant parts between which to establish relations. Our Bi-LSTM with the attention mechanism does not perform better than the plain Bi-LSTM either. Finally, Tables 2 and 3 show the results of this experiment on two other languages, Finnish and Czech, for which the sizes of the training data are very limited compared to the Persian dataset. As expected, with such small training data, the classification methods perform better than the more complex DL strategies. Table 4 shows a comparison of our DL models when different LSTM output sizes and drop-out thresholds are tested. Only the two best-performing models (LSTM and Bi-LSTM) are shown.",
"cite_spans": [],
"ref_spans": [
{
"start": 589,
"end": 596,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 841,
"end": 848,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 1976,
"end": 1983,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5.2"
},
{
"text": "As seen in the tables, the classification models perform well compared to the more complex DL models. One explanation for this is the lack of any external information (other than a segmented lexicon), which limits the number of possible features extractable from the training data. For example, there is no information about preceding words, and consequently RNN-based models cannot learn any information about distant preceding characters in the training phase. Possibly, this also explains the inferior performance of our seq2seq model compared to the Bi-LSTM model implemented for this work. Table 4 : Effect of using different hyperparameters on the LSTM and Bi-LSTM models, the two best-performing deep neural network models, for the Persian dataset.",
"cite_spans": [],
"ref_spans": [
{
"start": 240,
"end": 247,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comparison of Different Models",
"sec_num": "5.2.1"
},
{
"text": "Finally, Table 5 shows the results of selected models when the segmentation is performed on all words occurring in a corpus instead of on a segmented lexicon. In this experiment, we expected more frequent words to have a higher effect on the results than less frequent words.",
"cite_spans": [],
"ref_spans": [
{
"start": 9,
"end": 16,
"text": "Table 5",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Comparison of Different Models",
"sec_num": "5.2.1"
},
{
"text": "In order to evaluate the effect of the training data size on our DL models, different amounts of training data are selected and fed to our models. Figure 4 and Figure 5 demonstrate an experiment in which the baseline is the result of the unsupervised version of MORFESSOR on the same test dataset. Only the four best-performing feature-based models, in addition to two DL-based models, are shown here. As these figures show, once there are more than 10K training instances, increasing the training data further no longer has a substantial effect.",
"cite_spans": [],
"ref_spans": [
{
"start": 152,
"end": 160,
"text": "Figure 4",
"ref_id": "FIGREF3"
},
{
"start": 165,
"end": 173,
"text": "Figure 5",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Effect of Training Data Size",
"sec_num": "5.2.2"
},
{
"text": "An important feature of the Persian and Arabic languages is the existence of semi-space. For example word \" \u202b\"\u06a9\u202c (books) is a combination of word \" \u202b\"\u06a9\u202c and \" \", in which the former is Persian translation of word \"book\" and the latter is morpheme for a plural form. We can say these semi-space signs segment words into smaller morphemes. However, in formal writing and in all Persian normal corpora, this space is neglected frequently and it could make a lot of problems in Persian and Arabic morphological segmentation task. For example both forms for the previous example, \" \u202b\"\u06a9\u202c and \" \u202b\"\u06a9\u202c , are considered correct in Persian text and have the same meaning. To deal with this problem and in order to improve the quality of our segmentation dataset, we implemented a preprocessor to distinguish this kind of space in Persian words and consequently our hand-annotated dataset contains these semispaces correctly. While we wanted to test the effect of having this prior knowledge in the lexicon, we evaluated our models in two different forms. In the first case, we used our hand annotated dataset as is. In the second case, we removed all semispaces from the lexicon. Table 6 shows a comparison for deploying our models on these two different datasets and as could be seen in this table, having the accurate dataset which is created by our preprocessing strategy could improve results drastically.",
"cite_spans": [],
"ref_spans": [
{
"start": 1169,
"end": 1176,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Semi-Space Feature for Persian Words",
"sec_num": "5.2.3"
},
{
"text": "The main task of this work is to evaluate different supervised models to find the best segmentation of a word when only a segmented lexicon without any extra information is available in the training phase. In recent years, recurrent neural networks (RNN) attracted a growing interest in morphological analysis, that is why we decided to design and evaluate various neural network based models (LSTM, Bi-LSTM, GRU, and attention based models) as well as some machine learning classification models including SVM, Random Forest, Logistic Regression and others for our morphological segmentation task. While a critical point in any DL model is the training data size, we decided to create a rich hand annotated Persian lexicon which is the only segmented corpus for Persian words. Using this lexicon we evaluated our presented models as well as the effect of training data size on results. Moreover, we evaluated and tested our models on some limited datasets for Czech and Finnish languages. Experimental results show our Bi-LSTM model performs slightly better in boundary prediction, however the results of classification-based approaches overcome the DL models in percentage of completely correctly segmented words. Table 6 : The effect of considering semi-space on training data when all training data are used.",
"cite_spans": [],
"ref_spans": [
{
"start": 1216,
"end": 1223,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Czech Republic (project LM2015071).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "https://github.com/HaniehP/ PersianNER 2 https://github.com/sobhe/hazm",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://morpho.aalto.fi/events/ morphochallenge2010/datasets.shtml",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The research was supported by OP RDE project No. CZ.02.2.69/0.0/0.0/16 027/0008495, International Mobility of Researchers at Charles University, and by grant No. 19-14534S of the Grant Agency of the Czech Republic. It has been using language resources developed, stored, and distributed by the LINDAT/CLARIN project of the Ministry of Education, Youth and Sports of the",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Persian Morphologically Segmented Lexicon 0.5. LIN-DAT/CLARIN digital library at the Institute of Formal and Applied Linguistics (\u00daFAL",
"authors": [
{
"first": "Ebrahim",
"middle": [],
"last": "Ansari",
"suffix": ""
},
{
"first": "Zden\u011bk",
"middle": [],
"last": "\u017dabokrtsk\u00fd",
"suffix": ""
},
{
"first": "Hamid",
"middle": [],
"last": "Haghdoost",
"suffix": ""
},
{
"first": "Mahshid",
"middle": [],
"last": "Nikravesh",
"suffix": ""
}
],
"year": 2019,
"venue": "Faculty of Mathematics and Physics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ebrahim Ansari, Zden\u011bk\u017dabokrtsk\u00fd, Hamid Hagh- doost, and Mahshid Nikravesh. 2019. Persian Morphologically Segmented Lexicon 0.5. LIN- DAT/CLARIN digital library at the Institute of Formal and Applied Linguistics (\u00daFAL), Faculty of Mathematics and Physics, Charles University, https://hdl.handle.net/11234/1-3011.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Unsupervised Discovery of Persian Morphemes",
"authors": [
{
"first": "Mohsen",
"middle": [],
"last": "Arabsorkhi",
"suffix": ""
},
{
"first": "Mehrnoush",
"middle": [],
"last": "Shamsfard",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Eleventh Conference of the European Chapter of the Association for Computational Linguistics: Posters Demonstrations",
"volume": "",
"issue": "",
"pages": "175--178",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohsen Arabsorkhi and Mehrnoush Shamsfard. 2006. Unsupervised Discovery of Persian Morphemes. In Proceedings of the Eleventh Conference of the European Chapter of the Association for Computational Linguistics: Posters Demonstra- tions. Association for Computational Linguistics, Stroudsburg, PA, USA, EACL '06, pages 175-178.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. Cite arxiv:1409.0473Comment: Accepted at ICLR 2015 as oral presentation. http://arxiv.org/abs/1409.0473.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Lessons from building a Persian written corpus: Peykare. Language Resources and Evaluation",
"authors": [
{
"first": "Mahmood",
"middle": [],
"last": "Bijankhan",
"suffix": ""
},
{
"first": "Javad",
"middle": [],
"last": "Sheykhzadegan",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Bahrani",
"suffix": ""
},
{
"first": "Masood",
"middle": [],
"last": "Ghayoomi",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "45",
"issue": "",
"pages": "143--164",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mahmood Bijankhan, Javad Sheykhzadegan, Mo- hammad Bahrani, and Masood Ghayoomi. 2011. Lessons from building a Persian written corpus: Peykare. Language Resources and Evaluation 45(2):143-164.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A joint model for word embedding and word morphology",
"authors": [
{
"first": "Kris",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Marek",
"middle": [],
"last": "Rei",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 1st Workshop on Representation Learning for NLP. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "18--26",
"other_ids": {
"DOI": [
"10.18653/v1/W16-1603"
]
},
"num": null,
"urls": [],
"raw_text": "Kris Cao and Marek Rei. 2016. A joint model for word embedding and word morphology. In Proceedings of the 1st Workshop on Representa- tion Learning for NLP. Association for Computa- tional Linguistics, Berlin, Germany, pages 18-26. https://doi.org/10.18653/v1/W16-1603.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Joint semantic synthesis and morphological analysis of the derived word",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan Cotterell and Hinrich Sch\u00fctze. 2017. Joint semantic synthesis and morphological analysis of the derived word. CoRR abs/1701.00946.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Morphbased speech recognition and modeling of outof-vocabulary words across languages",
"authors": [
{
"first": "Mathias",
"middle": [],
"last": "Creutz",
"suffix": ""
},
{
"first": "Teemu",
"middle": [],
"last": "Hirsim\u00e4ki",
"suffix": ""
},
{
"first": "Mikko",
"middle": [],
"last": "Kurimo",
"suffix": ""
},
{
"first": "Antti",
"middle": [],
"last": "Puurula",
"suffix": ""
},
{
"first": "Janne",
"middle": [],
"last": "Pylkk\u00f6nen",
"suffix": ""
},
{
"first": "Vesa",
"middle": [],
"last": "Siivola",
"suffix": ""
},
{
"first": "Matti",
"middle": [],
"last": "Varjokallio",
"suffix": ""
},
{
"first": "Ebru",
"middle": [],
"last": "Arisoy",
"suffix": ""
},
{
"first": "Murat",
"middle": [],
"last": "Sara\u00e7lar",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Stolcke",
"suffix": ""
}
],
"year": 2007,
"venue": "ACM Trans. Speech Lang. Process",
"volume": "5",
"issue": "1",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/1322391.1322394"
]
},
"num": null,
"urls": [],
"raw_text": "Mathias Creutz, Teemu Hirsim\u00e4ki, Mikko Ku- rimo, Antti Puurula, Janne Pylkk\u00f6nen, Vesa Siivola, Matti Varjokallio, Ebru Arisoy, Murat Sara\u00e7lar, and Andreas Stolcke. 2007. Morph- based speech recognition and modeling of out- of-vocabulary words across languages. ACM Trans. Speech Lang. Process. 5(1):3:1-3:29. https://doi.org/10.1145/1322391.1322394.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Unsupervised discovery of morphemes",
"authors": [
{
"first": "Mathias",
"middle": [],
"last": "Creutz",
"suffix": ""
},
{
"first": "Krista",
"middle": [],
"last": "Lagus",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the ACL-02 Workshop on Morphological and Phonological Learning. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "21--30",
"other_ids": {
"DOI": [
"10.3115/1118647.1118650"
]
},
"num": null,
"urls": [],
"raw_text": "Mathias Creutz and Krista Lagus. 2002. Un- supervised discovery of morphemes. In Pro- ceedings of the ACL-02 Workshop on Mor- phological and Phonological Learning. Associa- tion for Computational Linguistics, pages 21-30. https://doi.org/10.3115/1118647.1118650.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Unsupervised learning of the morphology of a natural language",
"authors": [
{
"first": "John",
"middle": [],
"last": "Goldsmith",
"suffix": ""
}
],
"year": 2001,
"venue": "Comput. Linguist",
"volume": "27",
"issue": "2",
"pages": "153--198",
"other_ids": {
"DOI": [
"10.1162/089120101750300490"
]
},
"num": null,
"urls": [],
"raw_text": "John Goldsmith. 2001. Unsupervised learn- ing of the morphology of a natural lan- guage. Comput. Linguist. 27(2):153-198.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "North S\u00e1mi morphological segmentation with low-resource semi-supervised sequence labeling",
"authors": [
{
"first": "Stig-Arne",
"middle": [],
"last": "Gr\u00f6nroos",
"suffix": ""
},
{
"first": "Sami",
"middle": [],
"last": "Virpioja",
"suffix": ""
},
{
"first": "Mikko",
"middle": [],
"last": "Kurimo",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fifth International Workshop on Computational Linguistics for Uralic Languages. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "15--26",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stig-Arne Gr\u00f6nroos, Sami Virpioja, and Mikko Ku- rimo. 2019. North S\u00e1mi morphological seg- mentation with low-resource semi-supervised se- quence labeling. In Proceedings of the Fifth In- ternational Workshop on Computational Linguis- tics for Uralic Languages. Association for Compu- tational Linguistics, Tartu, Estonia, pages 15-26. https://www.aclweb.org/anthology/W19-0302.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Morfessor Flat-Cat: An HMM-Based Method for Unsupervised and Semi-Supervised Learning of Morphology",
"authors": [
{
"first": "Stig-Arne",
"middle": [],
"last": "Gr\u00f6nroos",
"suffix": ""
},
{
"first": "Sami",
"middle": [],
"last": "Virpioja",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Smit",
"suffix": ""
},
{
"first": "Mikko",
"middle": [],
"last": "Kurimo",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers. Dublin City University and Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1177--1185",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stig-Arne Gr\u00f6nroos, Sami Virpioja, Peter Smit, and Mikko Kurimo. 2014. Morfessor Flat- Cat: An HMM-Based Method for Unsuper- vised and Semi-Supervised Learning of Mor- phology. In Proceedings of COLING 2014, the 25th International Conference on Compu- tational Linguistics: Technical Papers. Dublin City University and Association for Computational Linguistics, Dublin, Ireland, pages 1177-1185. https://www.aclweb.org/anthology/C14-1111.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "From Phoneme to Morpheme",
"authors": [
{
"first": "Zellig",
"middle": [
"S."
],
"last": "Harris",
"suffix": ""
}
],
"year": 1970,
"venue": "",
"volume": "",
"issue": "",
"pages": "32--67",
"other_ids": {
"DOI": [
"10.1007/978-94-017-6059-1_2"
]
},
"num": null,
"urls": [],
"raw_text": "Zellig S. Harris. 1970. From Phoneme to Mor- pheme, Springer Netherlands, Dordrecht, pages 32- 67. https://doi.org/10.1007/978-94-017-6059-1 2.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Persian Affixes and Verbs",
"authors": [
{
"first": "Mahmoud",
"middle": [],
"last": "Hesabi",
"suffix": ""
}
],
"year": 1988,
"venue": "",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mahmoud Hesabi. 1988. Persian Affixes and Verbs, volume 1. Javidan.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural Comput",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {
"DOI": [
"10.1162/neco.1997.9.8.1735"
]
},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Comput. 9(8):1735- 1780. https://doi.org/10.1162/neco.1997.9.8.1735.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Fortification of neural morphological segmentation models for polysynthetic minimalresource languages",
"authors": [
{
"first": "Katharina",
"middle": [],
"last": "Kann",
"suffix": ""
},
{
"first": "Jesus Manuel",
"middle": [],
"last": "Mager Hois",
"suffix": ""
},
{
"first": "Ivan Vladimir",
"middle": [],
"last": "Meza Ruiz",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "47--57",
"other_ids": {
"DOI": [
"10.18653/v1/N18-1005"
]
},
"num": null,
"urls": [],
"raw_text": "Katharina Kann, Jesus Manuel Mager Hois, Ivan Vladimir Meza Ruiz, and Hinrich Sch\u00fctze. 2018. Fortification of neural morphological segmentation models for polysynthetic minimal- resource languages. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers). Association for Computational Lin- guistics, New Orleans, Louisiana, pages 47-57. https://doi.org/10.18653/v1/N18-1005.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "MED: The LMU system for the SIGMORPHON 2016 shared task on morphological reinflection",
"authors": [
{
"first": "Katharina",
"middle": [],
"last": "Kann",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 14th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "62--70",
"other_ids": {
"DOI": [
"10.18653/v1/W16-2010"
]
},
"num": null,
"urls": [],
"raw_text": "Katharina Kann and Hinrich Sch\u00fctze. 2016. MED: The LMU system for the SIGMORPHON 2016 shared task on morphological reinflection. In Pro- ceedings of the 14th SIGMORPHON Workshop on Computational Research in Phonetics, Phonol- ogy, and Morphology. Association for Computa- tional Linguistics, Berlin, Germany, pages 62-70. https://doi.org/10.18653/v1/W16-2010.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Extracting an English-Persian Parallel Corpus from Comparable Corpora",
"authors": [
{
"first": "Akbar",
"middle": [],
"last": "Karimi",
"suffix": ""
},
{
"first": "Ebrahim",
"middle": [],
"last": "Ansari",
"suffix": ""
},
{
"first": "Bahram Sadeghi",
"middle": [],
"last": "Bigham",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation, LREC 2018",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Akbar Karimi, Ebrahim Ansari, and Bahram Sadeghi Bigham. 2018. Extracting an English-Persian Paral- lel Corpus from Comparable Corpora. In Proceed- ings of the Eleventh International Conference on Language Resources and Evaluation, LREC 2018, Miyazaki, Japan, May 7-12, 2018..",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Semi-supervised extensions to Morfessor Baseline",
"authors": [
{
"first": "Oskar",
"middle": [],
"last": "Kohonen",
"suffix": ""
},
{
"first": "Sami",
"middle": [],
"last": "Virpioja",
"suffix": ""
},
{
"first": "Laura",
"middle": [],
"last": "Lepp\u00e4nen",
"suffix": ""
},
{
"first": "Krista",
"middle": [],
"last": "Lagus",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oskar Kohonen, Sami Virpioja, Laura Lepp\u00e4nen, and Krista Lagus. 2010. Semi-supervised extensions to Morfessor Baseline.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "An unsupervised method for uncovering morphological chains",
"authors": [
{
"first": "Karthik",
"middle": [],
"last": "Narasimhan",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "Tommi",
"middle": [],
"last": "Jaakkola",
"suffix": ""
}
],
"year": 2015,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "3",
"issue": "",
"pages": "157--167",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00130"
]
},
"num": null,
"urls": [],
"raw_text": "Karthik Narasimhan, Regina Barzilay, and Tommi Jaakkola. 2015. An unsupervised method for uncov- ering morphological chains. Transactions of the As- sociation for Computational Linguistics 3:157-167. https://doi.org/10.1162/tacl a 00130.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Scikit-learn: Machine learning in Python",
"authors": [
{
"first": "F",
"middle": [],
"last": "Pedregosa",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Varoquaux",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Gramfort",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Michel",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Thirion",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Grisel",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Blondel",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Prettenhofer",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Weiss",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Dubourg",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Vanderplas",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Passos",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Cournapeau",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Brucher",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Perrot",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Duchesnay",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Machine Learning Research",
"volume": "12",
"issue": "",
"pages": "2825--2830",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Pretten- hofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Pas- sos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research 12:2825-2830.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Unsupervised morphological segmentation with log-linear models",
"authors": [
{
"first": "Hoifung",
"middle": [],
"last": "Poon",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Cherry",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "209--217",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hoifung Poon, Colin Cherry, and Kristina Toutanova. 2009. Unsupervised morphological segmentation with log-linear models. In Proceedings of Hu- man Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics. Association for Computational Linguistics, Strouds- burg, PA, USA, NAACL '09, pages 209-217.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "BiLSTM-CRF for Persian Named-Entity Recognition ArmanPersoNERCorpus: the First Entity-Annotated Persian Dataset",
"authors": [
{
"first": "Hanieh",
"middle": [],
"last": "Poostchi",
"suffix": ""
},
{
"first": "Ehsan",
"middle": [],
"last": "Zare Borzeshi",
"suffix": ""
},
{
"first": "Massimo",
"middle": [],
"last": "Piccardi",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hanieh Poostchi, Ehsan Zare Borzeshi, and Massimo Piccardi. 2018. BiLSTM-CRF for Persian Named- Entity Recognition ArmanPersoNERCorpus: the First Entity-Annotated Persian Dataset. In Proceed- ings of the Eleventh International Conference on Language Resources and Evaluation, LREC 2018, Miyazaki, Japan, May 7-12, 2018..",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Orthographic and Morphological Processing for Persian-to-English Statistical Machine Translation",
"authors": [
{
"first": "Mohammad",
"middle": [],
"last": "Sadegh Rasooli",
"suffix": ""
},
{
"first": "Ahmed",
"middle": [
"El"
],
"last": "Kholy",
"suffix": ""
},
{
"first": "Nizar",
"middle": [],
"last": "Habash",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Sixth International Joint Conference on Natural Language Processing. Asian Federation of Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1047--1051",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohammad Sadegh Rasooli, Ahmed El Kholy, and Nizar Habash. 2013. Orthographic and Morpho- logical Processing for Persian-to-English Statisti- cal Machine Translation. In Proceedings of the Sixth International Joint Conference on Natural Language Processing. Asian Federation of Natural Language Processing, Nagoya, Japan, pages 1047- 1051. https://www.aclweb.org/anthology/I13-1144.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Supervised Morphological Segmentation in a Low-Resource Learning Setting using Conditional Random Fields",
"authors": [
{
"first": "Teemu",
"middle": [],
"last": "Ruokolainen",
"suffix": ""
},
{
"first": "Oskar",
"middle": [],
"last": "Kohonen",
"suffix": ""
},
{
"first": "Sami",
"middle": [],
"last": "Virpioja",
"suffix": ""
},
{
"first": "Mikko",
"middle": [],
"last": "Kurimo",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Seventeenth Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "29--37",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Teemu Ruokolainen, Oskar Kohonen, Sami Virpioja, and Mikko Kurimo. 2013. Supervised Morphologi- cal Segmentation in a Low-Resource Learning Set- ting using Conditional Random Fields. In Proceed- ings of the Seventeenth Conference on Computa- tional Natural Language Learning. Association for Computational Linguistics, Sofia, Bulgaria, pages 29-37. https://www.aclweb.org/anthology/W13-",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Painless semisupervised morphological segmentation using conditional random fields",
"authors": [
{
"first": "Teemu",
"middle": [],
"last": "Ruokolainen",
"suffix": ""
},
{
"first": "Oskar",
"middle": [],
"last": "Kohonen",
"suffix": ""
},
{
"first": "Sami",
"middle": [],
"last": "Virpioja",
"suffix": ""
},
{
"first": "Mikko",
"middle": [],
"last": "Kurimo",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "84--89",
"other_ids": {
"DOI": [
"10.3115/v1/E14-4017"
]
},
"num": null,
"urls": [],
"raw_text": "Teemu Ruokolainen, Oskar Kohonen, Sami Virpi- oja, and Mikko Kurimo. 2014. Painless semi- supervised morphological segmentation using con- ditional random fields. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, volume 2: Short Papers. Association for Computational Linguistics, Gothenburg, Sweden, pages 84-89. https://doi.org/10.3115/v1/E14-4017.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Minimallysupervised morphological segmentation using adaptor grammars",
"authors": [
{
"first": "Kairit",
"middle": [],
"last": "Sirts",
"suffix": ""
},
{
"first": "Sharon",
"middle": [],
"last": "Goldwater",
"suffix": ""
}
],
"year": 2013,
"venue": "TACL",
"volume": "1",
"issue": "",
"pages": "255--266",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kairit Sirts and Sharon Goldwater. 2013. Minimally- supervised morphological segmentation using adap- tor grammars. TACL 1:255-266.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Sequence to sequence learning with neural networks",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V."
],
"last": "Le",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. CoRR abs/1409.3215.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "A new hybrid stemming method for Persian language. Digital Scholarship in the Humanities",
"authors": [
{
"first": "Hossein",
"middle": [],
"last": "Taghi-Zadeh",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [
"Hadi"
],
"last": "Sadreddini",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Hasan Diyanati",
"suffix": ""
},
{
"first": "Amir Hossein",
"middle": [],
"last": "Rasekh",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "32",
"issue": "",
"pages": "209--221",
"other_ids": {
"DOI": [
"10.1093/llc/fqv053"
]
},
"num": null,
"urls": [],
"raw_text": "Hossein Taghi-Zadeh, Mohammad Hadi Sadreddini, Mohammad Hasan Diyanati, and Amir Hos- sein Rasekh. 2015. A new hybrid stem- ming method for Persian language. Digi- tal Scholarship in the Humanities 32(1):209-221. https://doi.org/10.1093/llc/fqv053.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Empirical comparison of evaluation methods for unsupervised learning of morphology",
"authors": [
{
"first": "Sami",
"middle": [],
"last": "Virpioja",
"suffix": ""
},
{
"first": "Ville",
"middle": [
"T."
],
"last": "Turunen",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Spiegler",
"suffix": ""
},
{
"first": "Oskar",
"middle": [],
"last": "Kohonen",
"suffix": ""
},
{
"first": "Mikko",
"middle": [],
"last": "Kurimo",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "45--90",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sami Virpioja, Ville T. Turunen, Sebastian Spiegler, Oskar Kohonen, and Mikko Kurimo. 2011. Empir- ical comparison of evaluation methods for unsuper- vised learning of morphology. TRAITEMENT AU- TOMATIQUE DES LANGUES 52(2):45-90.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Morphological Segmentation with Window LSTM Neural Networks",
"authors": [
{
"first": "Linlin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Zhu",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Gerard",
"middle": [],
"last": "De Melo",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "2842--2848",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Linlin Wang, Zhu Cao, Yu Xia, and Gerard de Melo. 2016. Morphological Segmentation with Window LSTM Neural Networks. In Dale Schuurmans and Michael P. Wellman, editors, AAAI. AAAI Press, pages 2842-2848.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Merging data resources for inflectional and derivational morphology in Czech",
"authors": [
{
"first": "Zden\u011bk",
"middle": [],
"last": "\u017dabokrtsk\u00fd",
"suffix": ""
},
{
"first": "Magda",
"middle": [],
"last": "\u0160ev\u010d\u00edkov\u00e1",
"suffix": ""
},
{
"first": "Milan",
"middle": [],
"last": "Straka",
"suffix": ""
},
{
"first": "Jon\u00e1\u0161",
"middle": [],
"last": "Vidra",
"suffix": ""
},
{
"first": "Ad\u00e9la",
"middle": [],
"last": "Limbursk\u00e1",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 10th International Conference on Language Resources and Evaluation (LREC 2016). European Language Resources Association",
"volume": "",
"issue": "",
"pages": "1307--1314",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zden\u011bk\u017dabokrtsk\u00fd, Magda\u0160ev\u010d\u00edkov\u00e1, Milan Straka, Jon\u00e1\u0161 Vidra, and Ad\u00e9la Limbursk\u00e1. 2016. Merging data resources for inflectional and derivational mor- phology in Czech. In Nicoletta Calzolari, Khalid Choukri, Thierry Declerck, Marko Grobelnik, Bente Maegaard, Joseph Mariani, Asunci\u00f3n Moreno, Jan Odijk, and Stelios Piperidis, editors, Proceedings of the 10th International Conference on Language Resources and Evaluation (LREC 2016). European Language Resources Association, Paris, France, pages 1307-1314.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "The schema of the LSTM/GRU models used in these experiments.",
"uris": null,
"type_str": "figure"
},
"FIGREF1": {
"num": null,
"text": "The schema of the Bi-LSTM model used in these experiments.",
"uris": null,
"type_str": "figure"
},
"FIGREF2": {
"num": null,
"text": "The schema of the Bi-LSTM model with the attention mechanism used in these experiments.",
"uris": null,
"type_str": "figure"
},
"FIGREF3": {
"num": null,
"text": "The effect of Persian training data size on boundary detection F-measure.",
"uris": null,
"type_str": "figure"
},
"FIGREF4": {
"num": null,
"text": "The effect of Persian training data size on whole-word segmentation accuracy.",
"uris": null,
"type_str": "figure"
},
"TABREF1": {
"html": null,
"num": null,
"content": "<table/>",
"type_str": "table",
"text": "Result of applying our models on the small Persian segmented lexicon. P%, R%, and F% indicate precision, recall, and F-measure, respectively. W% denotes the percentage of correctly predicted words, and Ch% indicates the accuracy of characters classified into boundary or non-boundary classes."
},
"TABREF3": {
"html": null,
"num": null,
"content": "<table><tr><td>Model</td><td>P% / R% / F%</td><td>W%</td><td>Ch%</td></tr><tr><td>LSTM</td><td>69.64 / 36.44 / 47.82</td><td>04.19</td><td>69.77</td></tr><tr><td>GRU</td><td>74.72 / 27.23 / 39.92</td><td>00.59</td><td>63.86</td></tr><tr><td>Bi-LSTM</td><td>68.56 / 48.33 / 56.69</td><td>05.38</td><td>67.45</td></tr><tr><td>Bi-LSTM with Attention</td><td>66.62 / 71.16 / 68.81</td><td>08.98</td><td>72.16</td></tr><tr><td>SVC, Kernel: linear</td><td>84.28 / 70.84 / 76.98</td><td>20.95</td><td>83.88</td></tr><tr><td>SVC, Kernel: poly, Degree: 2</td><td>91.42 / 69.46 / 78.94</td><td>31.73</td><td>85.90</td></tr><tr><td>SVC, Kernel: rbf</td><td>91.39 / 67.40 / 77.59</td><td>30.53</td><td>85.19</td></tr><tr><td>SVC, Kernel: poly, Degree: 5</td><td>94.03 / 48.71 / 64.18</td><td>20.35</td><td>79.32</td></tr><tr><td>SVC, Kernel: poly, Degree: 3</td><td>90.95 / 60.37 / 72.57</td><td>25.14</td><td>82.64</td></tr><tr><td>Logistic Regression, Solver: sag</td><td>90.69 / 66.89 / 76.99</td><td>25.04</td><td>84.80</td></tr><tr><td>Logistic Regression, Solver: liblinear</td><td>90.69 / 66.89 / 76.99</td><td>25.04</td><td>84.80</td></tr><tr><td>Logistic Regression, Solver: lbfgs</td><td>90.69 / 66.89 / 76.99</td><td>25.04</td><td>84.80</td></tr><tr><td>KNeighbors, Neighbors: 5</td><td>82.18 / 79.93 / 81.04</td><td>28.74</td><td>85.77</td></tr><tr><td>KNeighbors, Neighbors: 10</td><td>87.50 / 76.15 / 81.24</td><td>29.34</td><td>86.62</td></tr><tr><td>KNeighbors, Neighbors: 30</td><td>82.18 / 79.93 / 81.04</td><td>28.74</td><td>85.77</td></tr><tr><td>Ada Boost, Estimators: 100</td><td>88.85 / 57.46 / 69.79</td><td>16.16</td><td>81.08</td></tr><tr><td>Decision Tree</td><td>78.46 / 56.26 / 65.53</td><td>15.56</td><td>77.49</td></tr><tr><td>Random Forest, Estimators: 10</td><td>91.42 / 65.86 / 76.57</td><td>29.34</td><td>84.67</td></tr><tr><td>Random Forest, Estimators: 100</td><td>91.76 / 68.78 / 76.82</td><td>29.34</td><td>85.77</td></tr><tr><td>Bernoulli Naive Bayes</td><td>85.94 / 74.44 / 79.77</td><td>26.94</td><td>85.64</td></tr><tr><td>Perceptron MaxIteration: 50</td><td>80.45 / 72.04 / 76.01</td><td>19.16</td><td>82.71</td></tr><tr><td>Unsupervised MORFESSOR</td><td>44.28 / 99.33 / 61.25</td><td>00.59</td><td>44.61</td></tr><tr><td>Supervised MORFESSOR</td><td>67.12 / 77.43 / 71.91</td><td>05.95</td><td>73.33</td></tr></table>",
"type_str": "table",
"text": "Result of applying our models on the small Finnish segmented lexicon."
},
"TABREF4": {
"html": null,
"num": null,
"content": "<table><tr><td>Model</td><td>Parameters</td><td>P% / R% / F%</td><td>W%</td><td>Ch%</td></tr><tr><td>Bi-LSTM</td><td>Outstate: 25 Dropout: 0.2</td><td>89.44 / 82.80 / 86.00</td><td>59.44</td><td>91.73</td></tr><tr><td>Bi-LSTM</td><td>Outstate: 50 Dropout: 0.2</td><td>88.79 / 87.89 / 88.34</td><td>62.57</td><td>92.86</td></tr><tr><td>Bi-LSTM</td><td>Outstate: 70 Dropout: 0.2</td><td>91.39 / 88.85 / 90.10</td><td>64.51</td><td>93.70</td></tr><tr><td>Bi-LSTM</td><td>Outstate: 70 Dropout: 0.5</td><td>92.50 / 88.65 / 90.53</td><td>66.51</td><td>94.37</td></tr><tr><td>LSTM</td><td>Outstate: 25 Dropout: 0.2</td><td>91.69 / 83.00 / 87.13</td><td>62.32</td><td>92.45</td></tr><tr><td>LSTM</td><td>Outstate: 50 Dropout: 0.2</td><td>93.09 / 82.29 / 87.36</td><td>60.82</td><td>92.67</td></tr><tr><td>LSTM</td><td>Outstate: 70 Dropout: 0.2</td><td>90.09 / 87.55 / 88.80</td><td>64.10</td><td>93.20</td></tr><tr><td>LSTM</td><td>Outstate: 70 Dropout: 0.5</td><td>87.86 / 88.59 / 88.22</td><td>62.19</td><td>92.72</td></tr></table>",
"type_str": "table",
"text": "Result of applying our models on the Czech segmented lexicon."
},
"TABREF6": {
"html": null,
"num": null,
"content": "<table/>",
"type_str": "table",
"text": "Experiment results when a model is used to predict boundaries of Persian words from a small corpus instead of lexicon words. Only the five best-performing models are shown."
}
}
}
}