{
"paper_id": "2019",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:48:58.563842Z"
},
"title": "The_Illiterati: Part-of-Speech Tagging for Magahi and Bhojpuri without Even Knowing the Alphabet",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Proisl",
"suffix": "",
"affiliation": {},
"email": "thomas.proisl@uos.de"
},
{
"first": "Peter",
"middle": [],
"last": "Uhrig",
"suffix": "",
"affiliation": {},
"email": "peter.uhrig@fau.de"
},
{
"first": "Philipp",
"middle": [],
"last": "Heinrich",
"suffix": "",
"affiliation": {},
"email": "philipp.heinrich@fau.de"
},
{
"first": "Andreas",
"middle": [],
"last": "Blombach",
"suffix": "",
"affiliation": {},
"email": "andreas.blombach@fau.de"
},
{
"first": "Sefora",
"middle": [],
"last": "Mammarella",
"suffix": "",
"affiliation": {},
"email": "sefora.mammarella@icloud.com"
},
{
"first": "Natalie",
"middle": [],
"last": "Dykes",
"suffix": "",
"affiliation": {},
"email": "natalie.mary.dykes@fau.de"
},
{
"first": "Besim",
"middle": [],
"last": "Kabashi",
"suffix": "",
"affiliation": {},
"email": "besim.kabashi@fau.de"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we describe the part-of-speechtagging experiments for Magahi and Bhojpuri that we conducted for our participation in the NSURL 2019 shared tasks 9 and 10 (Lowlevel NLP Tools for (Magahi|Bhojpuri) Language). We experiment with three different part-of-speech taggers and evaluate the impact of additional resources such as Brown clusters, word embeddings and transfer learning from additional tagged corpora in related languages. In a 10-fold cross-validation on the training data, our best-performing models achieve accuracies of 90.70% for Magahi and 94.08% for Bhojpuri. Accuracy increased to 94.79% for Magahi and dropped to 78.68% for Bhojpuri on the test data.",
"pdf_parse": {
"paper_id": "2019",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we describe the part-of-speechtagging experiments for Magahi and Bhojpuri that we conducted for our participation in the NSURL 2019 shared tasks 9 and 10 (Lowlevel NLP Tools for (Magahi|Bhojpuri) Language). We experiment with three different part-of-speech taggers and evaluate the impact of additional resources such as Brown clusters, word embeddings and transfer learning from additional tagged corpora in related languages. In a 10-fold cross-validation on the training data, our best-performing models achieve accuracies of 90.70% for Magahi and 94.08% for Bhojpuri. Accuracy increased to 94.79% for Magahi and dropped to 78.68% for Bhojpuri on the test data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Magahi and Bhojpuri are two of the three principal languages of the Bihari group (Maithili being the third). There are competing categorizations of the Bihari group within the Indo-Aryan languages (see Grierson, 1903; Cardona, 1974; Jeffers, 1976) . While there are few Magahi speakers outside of Southern Bihar, Bhojpuri is spoken in parts of two Indian states, Western Bihar and Eastern Uttar Pradesh, and the Southwest of Nepal. According to the 2011 census, about 51 million people in India stated Bhojpuri as their mother tongue, and about 13 million did so for Magahi. However, these numbers may seriously underestimate the actual number of speakers, since speakers of both languages often name Hindi as their first language -the language used in schools, courts, and other public institutions (Verma, 2003b, p. 547) .",
"cite_spans": [
{
"start": 202,
"end": 217,
"text": "Grierson, 1903;",
"ref_id": "BIBREF3"
},
{
"start": 218,
"end": 232,
"text": "Cardona, 1974;",
"ref_id": "BIBREF2"
},
{
"start": 233,
"end": 247,
"text": "Jeffers, 1976)",
"ref_id": "BIBREF4"
},
{
"start": 800,
"end": 822,
"text": "(Verma, 2003b, p. 547)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction and Related Work",
"sec_num": "1"
},
{
"text": "Despite these numbers, comparatively few linguistic resources and NLP tools currently exist for both languages, with most of the scarce attention having gone towards Bhojpuri (e.g. Ojha et al., 2015) .",
"cite_spans": [
{
"start": 166,
"end": 199,
"text": "Bhojpuri (e.g. Ojha et al., 2015)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction and Related Work",
"sec_num": "1"
},
{
"text": "It is beyond the scope of this paper and our own expertise to describe both languages in detail (but see, e.g., Verma, 2003b,a) . Among the features which appear pertinent to part-of-speech tagging of Magahi and Bhojpuri are SOV order, rich verb morphology, the extensive use of postpositions, and the unusual agreement system of Magahi (where the verb has to agree with subject and object simultaneously). Table 1 gives an overview of the two datasets of the shared task. While the training set for Bhojpuri is much larger, it also features a more fine-grained tagset. 2 Strategies and Systems",
"cite_spans": [
{
"start": 89,
"end": 127,
"text": "detail (but see, e.g., Verma, 2003b,a)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 407,
"end": 414,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Introduction and Related Work",
"sec_num": "1"
},
{
"text": "We experiment with three different, freely available part-of-speech taggers:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Part-of-Speech Taggers",
"sec_num": "2.1"
},
{
"text": "\u2022 SoMeWeTa (Proisl, 2018) , a tagger based on the averaged structured perceptron that supports domain adaptation and can incorporate external information sources such as Brown clusters. 1",
"cite_spans": [
{
"start": 11,
"end": 25,
"text": "(Proisl, 2018)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Part-of-Speech Taggers",
"sec_num": "2.1"
},
{
"text": "\u2022 A BiLSTM+CRF sequence tagger by Guillaume Genthial that uses character and word embeddings and supports transfer learning. 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Part-of-Speech Taggers",
"sec_num": "2.1"
},
{
"text": "\u2022 The Stanford Tagger (Toutanova et al., 2003) , which is based on a maximum entropy cyclic dependency network. 3",
"cite_spans": [
{
"start": 22,
"end": 46,
"text": "(Toutanova et al., 2003)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Part-of-Speech Taggers",
"sec_num": "2.1"
},
{
"text": "In addition to the training data provided by the task organizers, we use the following freely available resources:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Additional Resources",
"sec_num": "2.2"
},
{
"text": "\u2022 The Hindi UD treebank, which is based on the Hindi Dependency Treebank (HDTB; ca. 352,000 tokens; Bhat et al., 2017; Palmer et al., 2009) . 4",
"cite_spans": [
{
"start": 100,
"end": 118,
"text": "Bhat et al., 2017;",
"ref_id": "BIBREF0"
},
{
"start": 119,
"end": 139,
"text": "Palmer et al., 2009)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Additional Resources",
"sec_num": "2.2"
},
{
"text": "\u2022 A POS-tagged Magahi corpus (KMI-Mag; ca. 46,000 tokens) and a corpus of untagged Magahi texts (ca. 2.8 million tokens). 5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Additional Resources",
"sec_num": "2.2"
},
{
"text": "\u2022 Wikimedia dumps for Hindi (ca. 34.7 million tokens) and Bihari (ca. 700,000 tokens). 6 We extract the plain text using wikiextractor 7 and tokenize and sentence-split it using the ICU tokenizer via polyglot. 8",
"cite_spans": [
{
"start": 87,
"end": 88,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Additional Resources",
"sec_num": "2.2"
},
{
"text": "\u2022 Brown clusters (Brown et al., 1992) computed from the tokenized Wikimedia dumps and the untagged Magahi corpus (1000 clusters, minimum frequency 5). 9",
"cite_spans": [
{
"start": 17,
"end": 37,
"text": "(Brown et al., 1992)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Additional Resources",
"sec_num": "2.2"
},
{
"text": "1 https://github.com/tsproisl/SoMeWeTa 2 We use the slightly modified version by Riedl and Pad\u00f3 (2018) : https://github.com/riedlma/ sequence_tagging 3 https://nlp.stanford.edu/software/ tagger.html 4 https://github.com/ UniversalDependencies/UD_Hindi-HDTB/ tree/master 5 https://github.com/kmi-linguistics/ magahi 6 https://dumps.wikimedia.org 7 http://medialab.di.unipi.it/wiki/ Wikipedia_Extractor 8 http://polyglot-nlp.com/ 9 We use the implementation by Liang (2005) : https: //github.com/percyliang/brown-cluster",
"cite_spans": [
{
"start": 81,
"end": 102,
"text": "Riedl and Pad\u00f3 (2018)",
"ref_id": "BIBREF10"
},
{
"start": 428,
"end": 429,
"text": "9",
"ref_id": null
},
{
"start": 459,
"end": 471,
"text": "Liang (2005)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Additional Resources",
"sec_num": "2.2"
},
{
"text": "\u2022 Pre-trained fastText embeddings for Hindi and Bihari 10",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Additional Resources",
"sec_num": "2.2"
},
{
"text": "The additional tagged Magahi corpus (KMI-Mag) is annotated with a tagset consisting of 35 tags which is almost identical to the 33-tag tagset used in the Bhojpuri corpus. KMI-Mag uses three tags that do not occur in the Bhojpuri data (V_VM_VF, V_VM_VNF and V_VM_VNP) and misses one tag that is used for Bhojpuri (RD_ECH_B). For our transfer learning experiments targeting Bhojpuri, we simply convert the three verb tags to V_VM. For targeting Magahi, we map the 35 tags to UD tags.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Additional Resources",
"sec_num": "2.2"
},
{
"text": "The distinctive features of SoMeWeTa are its ability to leverage additional resources and its transfer learning or domain adaptation capabilities. Consequently, we focus on these two aspects in our experiments. For Bhojpuri, we experiment primarily with the Brown clusters computed from the Hindi and Bihari Wikimedia dumps and the untagged additional Magahi corpus (cf. section 2.2). Our crossvalidation experiments show that the Brown clusters have a small positive effect with the best results being obtained by Brown clusters computed from the union of all three additional corpora (cf. Table 2 ). With KMI-Mag we have a corpus of a closely related language that is annotated with an almost identical tagset (cf. section 2.2). However, pretraining on that and then adapting to Bhojpuri seems to have no noticeable effect.",
"cite_spans": [],
"ref_spans": [
{
"start": 591,
"end": 598,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments using SoMeWeTa",
"sec_num": "2.3"
},
{
"text": "For Magahi, we experiment with a wide range of transfer learning settings in addition to the different Brown clusters:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments using SoMeWeTa",
"sec_num": "2.3"
},
{
"text": "\u2022 Pretraining on one of KMI-Mag, HDTB or the Bhojpuri dataset (mapped to UD tags).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments using SoMeWeTa",
"sec_num": "2.3"
},
{
"text": "\u2022 Pretraining on all possible combinations of KMI-Mag, HDTB and the Bhojpuri dataset (using the concatenation of these resources).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments using SoMeWeTa",
"sec_num": "2.3"
},
{
"text": "\u2022 Longer pretraining chains where we start with HDTB and adapt to one or two other resources before we make the final adaptation to Magahi. Table 2 : Bhojpuri results for SoMeWeTa. We report the mean accuracies and 95% confidence intervals of a 10-fold cross-validation on the training data. The model that we submitted to the shared task is set in italics. and the untagged additional Magahi corpus. As for Bhojpuri, transfer learning does not seem to have any noticeable effect (cf. Table 3 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 140,
"end": 147,
"text": "Table 2",
"ref_id": null
},
{
"start": 485,
"end": 492,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Experiments using SoMeWeTa",
"sec_num": "2.3"
},
{
"text": "Neural networks with a BiLSTM-CRF architecture achieve POS-tagging results close to the current state of the art. 11 In our experiments, we focus less on the hyperparameters of the network but rather on the effects of our additional resources. We try out both the Hindi and Bihari fastText embeddings. Since the Bihari embeddings do not perform significantly better than the Hindi embeddings (cf . Table 4 ) and the Hindi embeddings cover a much larger vocabulary (15.3 million words instead of 8.9 million), we use the Hindi embeddings for our further experiments. In the following, we make use of the tagger's transfer learning abilities and pretrain the models on HDTB or KMI-Mag. The BiLSTM-CRF tagger seems to benefit more from the transfer learning setting than SoMeWeTa and achieves its best results for both languages with a transfer from KMI-Mag. Interestingly, the BiLSTM-CRF outperforms SoMeWeTa only on the Magahi dataset while it performs notably worse on the Bhojpuri dataset.",
"cite_spans": [
{
"start": 114,
"end": 116,
"text": "11",
"ref_id": null
}
],
"ref_spans": [
{
"start": 396,
"end": 405,
"text": ". Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments using the BiLSTM-CRF tagger",
"sec_num": "2.4"
},
{
"text": "The Stanford Log-linear Part-Of-Speech Tagger (Toutanova and Manning 2000; Toutanova et al. 2003 ) is a mature and stable tagger that still exhibits competitive performance. The system is feature-rich and offers a range of configuration options, the effects of which were initially not fully understood by our research group. It was thus decided to run extensive brute-force hyperparameter 11 Cf. https://aclweb.org/aclwiki/POS_ Tagging_ (State_of_the_art) tuning making educated guesses about the value ranges of the various parameters. The documentation in the JavaDoc for the MaxentTagger class 12 provides the necessary information. It was decided to set the following parameters with the values or ranges given in Table 5 and Table 6 .",
"cite_spans": [
{
"start": 46,
"end": 74,
"text": "(Toutanova and Manning 2000;",
"ref_id": "BIBREF12"
},
{
"start": 75,
"end": 96,
"text": "Toutanova et al. 2003",
"ref_id": "BIBREF11"
},
{
"start": 390,
"end": 392,
"text": "11",
"ref_id": null
},
{
"start": 438,
"end": 456,
"text": "(State_of_the_art)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 719,
"end": 738,
"text": "Table 5 and Table 6",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Experiments using the Stanford Tagger",
"sec_num": "2.5"
},
{
"text": "Combining all parameters results in 76,800 parameter combinations per language. Although training and testing can be completed in approximately 2 minutes on a modern personal computer, the sheer number of parameter combinations necessitated running the experiments on High-Performance-Computing infrastructure. The setup comprised a central queue of filenames of property files that all involved clients subscribed to.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments using the Stanford Tagger",
"sec_num": "2.5"
},
{
"text": "For Magahi, only two runs with all parameter combinations were performed: one with the top 80% of the training data as actual training data and the bottom 20% as test data and one with the bottom 80% as training data and the top 20% as test data. The values discussed below are the arithmetic mean of the accuracies of those two runs. As the Magahi tagset is Universal-Dependencies-compliant, it was straightforward to identify closed class words by pos tag and to supply the list to the tagger during the training phase.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments using the Stanford Tagger",
"sec_num": "2.5"
},
{
"text": "For Bhojpuri, a full 10-fold cross-validation was carried out for each of the parameter combinations, so the averages discussed below are most likely more reliable than those for Magahi. Since the Bhojpuri tagset was more complicated, we decided to learn the closed class tags automatically based on the default closedClassTagThreshold of 40. Thus, a pos tag is only considered a closed class if it is assigned to less than 40 different words. Table 4 : Results for the BiLSTM-CRF tagger. We report the mean accuracies and 95% confidence intervals of a 10-fold cross-validation on the training data. The models submitted to the shared task are set in italics.",
"cite_spans": [],
"ref_spans": [
{
"start": 444,
"end": 451,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments using the Stanford Tagger",
"sec_num": "2.5"
},
{
"text": "Given that the training dataset is smaller than what is available for more commonly researched languages, we expected that for most thresholds, values below the default values might be more relevant than above, which is why our choice of parameter values is skewed towards smaller numbers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments using the Stanford Tagger",
"sec_num": "2.5"
},
{
"text": "For both languages, performance decreases abruptly when rareWordThresh is set to 1. We exclude this setting for the remainder of the analysis, since it is obviously beneficial for the tagger to treat hapax legomena as rare words. Additionally, performance was insensitive to variation in veryCommonWordThresh since this value is ignored by the Tagger in our case. We thus fix the threshold at 250 and use simple linear models without interaction to analyze the influence of all other variables on performance measures:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments using the Stanford Tagger",
"sec_num": "2.5"
},
{
"text": "acc. = \u03b2 0 + \u03b2 1 (unicodeshape) + \u03b2 2 (macro) + 6 j=3 \u03b2 j \u03b3 j + \u03b5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments using the Stanford Tagger",
"sec_num": "2.5"
},
{
"text": "where \u03b2 i are the coefficients, \u03b3 j is one of the integer features (rareWordThresh, curWordMinFea-tureThresh, minFeatureThresh, rareWordMinFea-tureThresh), and \u03b5 is the residual error.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments using the Stanford Tagger",
"sec_num": "2.5"
},
{
"text": "Accuracy for Bhojpuri reaches around \u00b5 \u2248 93.88 with a standard deviation of approximately 0.064 and the linear model yielding an adjusted R 2 of approximately 0.80. For Magahi, overall performance is lower (\u00b5 \u2248 87.66) and variation is higher (\u03c3 \u2248 0.51), but this variation is wellexplained by the linear model (adjusted R 2 \u2248 0.98).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments using the Stanford Tagger",
"sec_num": "2.5"
},
{
"text": "For both languages, the macro parameter has the most influence on accuracy. For Bhojpuri, the best macro is bidirectional5words (yielding ceteris paribus 0.09 and 0.12 better results compared to generic and left3words, respectively). For Magahi, however, generic parameter default value value/range closedClassTags (none) ADP AUX CCONJ DET NUM PART PRON SCONJ PUNCT arch -architecture generic generic, left3word, bidirectional5words arch -further unknown-words option (none) naacl2003unknowns arch -unicode shapes for rare words (none) unicodeshapes(-2,2), unicodeshapes(-1,1), unicodeshapes(0), (none) iterations 100 100 learnClosedClassTags false false curWordMinFeatureThresh 2 1..4 minFeatureThresh 5 1..5 rareWordMinFeatureThresh 10 1..10 rareWordThresh 5 1..8 veryCommonWordThresh 250 100, 150, 200, 250 and left3words give better results (both approximately 1.0 accuracy points better than bidirectional5words). This is surprising, since according to the authors of the Stanford Tagger, \"[t]he left3words architectures are faster, but slightly less accurate, than the bidirectional architectures.\" 13 The only viable explanation that comes to mind is that possibly the Magahi gold standard corpus was annotated with a trigram tagger without sufficient manual correction. This is in line with our observation that in the Magahi data, items that should have been classified as punctuation marks recieved dubious tags, e.g. the grave accent (') was tagged only twice as punctuation, but was categorized as a noun five times, twice as an adposition, once as a verb and once as an auxiliary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments using the Stanford Tagger",
"sec_num": "2.5"
},
{
"text": "Examining only the respective best-performing macro, rareWordThresh explains most of the remaining variation, with a significant regression coefficient of about 0.02 for Bhojpuri and 0.07 for Magahi. However, the effect might de-crease for values higher than the ones tested here (rareWordThresh \u2208 {1, . . . , 8}).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments using the Stanford Tagger",
"sec_num": "2.5"
},
{
"text": "unicodeshape has a small effect on performance for Bhojpuri, where (-1,1) and (-2,2) yield an increase in performance by about 0.06 compared to (0) and None. This effect cannot be confirmed for Magahi. For both languages, performance decreases in curWordThresh, curWordMin-FeatureThresh, and rareWordMinFeatureThresh, though the effect is negligible and not always significant. In both cases, minFeatureThresh does not have a significant influence on accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments using the Stanford Tagger",
"sec_num": "2.5"
},
{
"text": "3 Results and Error Analysis",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments using the Stanford Tagger",
"sec_num": "2.5"
},
{
"text": "The overall results for Bhojpuri are delightful since they are even better than on our training data (see Table 7 ): Our optimized version of the Stanford tagger scored 95 points macro F 1 (94.78 accuracy), and we thus share first place with our sole competitor (team NITK-NLP); SoMeWeta and the BiLSTM tagger are close behind.",
"cite_spans": [],
"ref_spans": [
{
"start": 106,
"end": 113,
"text": "Table 7",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Bhojpuri",
"sec_num": "3.1"
},
{
"text": "We omit the very large confusion matrix (33\u00d733 and predominantly zero off the diagonal) \u2022 Two tags are not predicted by our tagger at all: RD_ECH_B (which appears once in the gold data and was misclassified as N_NN), and RD_UNK (classified once as N_NN and once as V_VM).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bhojpuri",
"sec_num": "3.1"
},
{
"text": "\u2022 RP_INJ appeared five times in the gold standard and was predicted correctly four times. This tag yields the worst recall (apart from the two pathological cases above).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bhojpuri",
"sec_num": "3.1"
},
{
"text": "\u2022 30 of the 195 occurrences of RD_SYM were misclassified (recall 84.6%), mostly as N_NN (26 cases).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bhojpuri",
"sec_num": "3.1"
},
{
"text": "\u2022 Further incorrect predictions of N_NN occur for JJ (11.3% of its occurrences classified as N_NN, 85.2% recall), RB (7.7%, 89.7% recall), and N_NNP (6.4%, 92.8% recall).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bhojpuri",
"sec_num": "3.1"
},
{
"text": "\u2022 Another notable confusion is the pair V_VM (87.8% recall) and V_VAUX (86.6% recall); V_VM was predicted as V_VAUX 64 times, while V_VAUX was tagged V_VM 66 times. Finally, V_VM was predicted as N_NN 85 times.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bhojpuri",
"sec_num": "3.1"
},
{
"text": "The results for our other submissions were very much in line with the results discussed here. 15 All in all, the errors made by our submissions are very much what one would expect: Very rare categories are sometimes misclassified, very frequent categories (such as N_NN) tend to be the go-to label for misclassifications, and similar morphosyntactic categories are confused with each other (V_VM and V_AUX, N_NN and N_NNP).",
"cite_spans": [
{
"start": 94,
"end": 96,
"text": "15",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bhojpuri",
"sec_num": "3.1"
},
{
"text": "With a macro F 1 score of only 77%, our best submissions, SoMeWeTa (78.68 accuracy) and BiLSTM-CRF (78.86 accuracy), rank second in the task of predicting Magahi tags, closely behind the submissions of one of our competing teams (see Table 8 ). Results are peculiar, since this is a drop of more than ten points compared to our cross-validation on the training data set and far outside our realized confidence intervals (see Table 3) Figure 1 shows the confusion matrix for SoMeWeTa. 16 Major problems arise for tags ADJ (15.5% recall), ADV (14.8%), PART (32.5%), and PROPN and X (both 0%), since these are quite frequent categories with severe error rates. As with Bhojpuri, the tagger misclassifies them as NOUNs and VERBs, which are the most frequent open classes. Moreover, the tagger frequently mistakes VERB for AUX and vice versa.",
"cite_spans": [
{
"start": 484,
"end": 486,
"text": "16",
"ref_id": null
}
],
"ref_spans": [
{
"start": 234,
"end": 241,
"text": "Table 8",
"ref_id": "TABREF10"
},
{
"start": 425,
"end": 433,
"text": "Table 3)",
"ref_id": "TABREF4"
},
{
"start": 434,
"end": 442,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Magahi",
"sec_num": "3.2"
},
{
"text": "The results for Bhojpuri are very satisfying. Close to 95% accuracy on a set of 33 tags with approximately 95,000 words of training data is in line with our expectations. It is a bit disappointing, however, that mindless parameter-tuning yields the best results -but the difference may very well not be significant.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
},
{
"text": "The results for Magahi are very disappointing. Since we do not know the language, it is difficult for us to pinpoint the exact reasons for the bad performance, be it an over-generalization of our taggers, a shift in the tag distribution in the test data or an issue with the annotation quality. At least, however, the use of additional resources outperforms mere parameter-tuning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
},
{
"text": "https://fasttext.cc/docs/en/ crawl-vectors.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://nlp.stanford.edu/nlp/javadoc/ javanlp/edu/stanford/nlp/tagger/maxent/ MaxentTagger.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://nlp.stanford.edu/nlp/javadoc/ javanlp/edu/stanford/nlp/tagger/maxent/ ExtractorFrames.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We focus on recall; precision is mostly the same as recall for all frequent labels, and higher for rare ones, since the taggers avoid predicting infrequent labels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "One notable exception is that the BiLSTM tagger did non predict the category RD_ECH at all (another hapax in the gold standard) but did include RD_ECH_B (once, incorrectly).16 Again, results are very similar for our other submissions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The Hindi/Urdu treebank project",
"authors": [
{
"first": "Ahmad",
"middle": [],
"last": "Riyaz",
"suffix": ""
},
{
"first": "Rajesh",
"middle": [],
"last": "Bhat",
"suffix": ""
},
{
"first": "Annahita",
"middle": [],
"last": "Bhatt",
"suffix": ""
},
{
"first": "Prescott",
"middle": [],
"last": "Farudi",
"suffix": ""
},
{
"first": "Bhuvana",
"middle": [],
"last": "Klassen",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Narasimhan",
"suffix": ""
},
{
"first": "Owen",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "Dipti",
"middle": [
"Misra"
],
"last": "Rambow",
"suffix": ""
},
{
"first": "Ashwini",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Vaidya",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sri Ramagurumurthy",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Vishnu",
"suffix": ""
}
],
"year": 2017,
"venue": "Handbook of Linguistic Annotation",
"volume": "",
"issue": "",
"pages": "659--697",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Riyaz Ahmad Bhat, Rajesh Bhatt, Annahita Farudi, Prescott Klassen, Bhuvana Narasimhan, Martha Palmer, Owen Rambow, Dipti Misra Sharma, Ash- wini Vaidya, Sri Ramagurumurthy Vishnu, et al. 2017. The Hindi/Urdu treebank project. In Nancy Ide and James Pustejovsky, editors, Handbook of Linguistic Annotation, pages 659-697. Springer.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Class-based n-gram models of natural language",
"authors": [
{
"first": "F",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Brown",
"suffix": ""
},
{
"first": "J",
"middle": [
"Della"
],
"last": "Vincent",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"V"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "Jennifer",
"middle": [
"C"
],
"last": "De Souza",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"L"
],
"last": "Lai",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mercer",
"suffix": ""
}
],
"year": 1992,
"venue": "Computational Linguistics",
"volume": "18",
"issue": "4",
"pages": "467--479",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter F. Brown, Vincent J. Della Pietra, Peter V. de Souza, Jennifer C. Lai, and Robert L. Mercer. 1992. Class-based n-gram models of natural lan- guage. Computational Linguistics, 18(4):467-479.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The Indo-Aryan languages",
"authors": [
{
"first": "George",
"middle": [],
"last": "Cardona",
"suffix": ""
}
],
"year": 1974,
"venue": "",
"volume": "9",
"issue": "",
"pages": "439--450",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George Cardona. 1974. The Indo-Aryan languages, 15th edition, volume 9, pages 439-450.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "V: Indo-Aryan Family, Eastern Group, Pt. II: Specimens of the Bihari and Oriya Languages",
"authors": [
{
"first": "George",
"middle": [],
"last": "Abraham Grierson",
"suffix": ""
}
],
"year": 1903,
"venue": "Linguistic survey of India",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George Abraham Grierson. 1903. Linguistic survey of India, Vol. V: Indo-Aryan Family, Eastern Group, Pt. II: Specimens of the Bihari and Oriya Languages. Office of the Superintendent of Government Print- ing, India, Calcutta.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "The position of the Bih\u0101r\u012b dialects in Indo-Aryan",
"authors": [
{
"first": "Robert",
"middle": [
"J"
],
"last": "Jeffers",
"suffix": ""
}
],
"year": 1976,
"venue": "Indo-Iranian Journal",
"volume": "18",
"issue": "3",
"pages": "215--225",
"other_ids": {
"DOI": [
"10.1007/BF00162689"
]
},
"num": null,
"urls": [],
"raw_text": "Robert J. Jeffers. 1976. The position of the Bih\u0101r\u012b dialects in Indo-Aryan. Indo-Iranian Journal, 18(3):215-225.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Master's thesis, Massachusetts Institute of Technology",
"authors": [
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Percy Liang. 2005. Semi-supervised learning for nat- ural language. Master's thesis, Massachusetts Insti- tute of Technology, Department of Electrical Engi- neering and Computer Science.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Training & evaluation of POS taggers in Indo-Aryan languages: a case of Hindi, Odia and Bhojpuri",
"authors": [
{
"first": "Atul",
"middle": [],
"last": "Ku Ojha",
"suffix": ""
},
{
"first": "Pitambar",
"middle": [],
"last": "Behera",
"suffix": ""
},
{
"first": "Srishti",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Girish",
"middle": [
"N"
],
"last": "Jha",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 7th Language & Technology Conference (LTC 2015)",
"volume": "",
"issue": "",
"pages": "524--529",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Atul Ku Ojha, Pitambar Behera, Srishti Singh, and Girish N. Jha. 2015. Training & evaluation of POS taggers in Indo-Aryan languages: a case of Hindi, Odia and Bhojpuri. In Proceedings of the 7th Lan- guage & Technology Conference (LTC 2015), pages 524-529.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Hindi syntax: Annotating dependency, lexical predicate-argument structure, and phrase structure",
"authors": [
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "Rajesh",
"middle": [],
"last": "Bhatt",
"suffix": ""
},
{
"first": "Bhuvana",
"middle": [],
"last": "Narasimhan",
"suffix": ""
},
{
"first": "Owen",
"middle": [],
"last": "Rambow",
"suffix": ""
},
{
"first": "Dipti",
"middle": [
"Misra"
],
"last": "Sharma",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Xia",
"suffix": ""
}
],
"year": 2009,
"venue": "The 7th International Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "14--17",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martha Palmer, Rajesh Bhatt, Bhuvana Narasimhan, Owen Rambow, Dipti Misra Sharma, and Fei Xia. 2009. Hindi syntax: Annotating dependency, lexi- cal predicate-argument structure, and phrase struc- ture. In The 7th International Conference on Natu- ral Language Processing, pages 14-17.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "SoMeWeTa: A part-of-speech tagger for German social media and web texts",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Proisl",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)",
"volume": "",
"issue": "",
"pages": "665--670",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Proisl. 2018. SoMeWeTa: A part-of-speech tagger for German social media and web texts. In Proceedings of the Eleventh International Confer- ence on Language Resources and Evaluation (LREC 2018), pages 665-670, Miyazaki. European Lan- guage Resources Association.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "census data, Data on language and mother tongue, Statement 1: Abstract of speakers' strength of languages and mother tongues",
"authors": [],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Office of the Registrar General & Census Commis- sioner. 2011. 2011 census data, Data on language and mother tongue, Statement 1: Abstract of speak- ers' strength of languages and mother tongues.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A named entity recognition shootout for German",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Riedl",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Pad\u00f3",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL 2018)",
"volume": "2",
"issue": "",
"pages": "120--125",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin Riedl and Sebastian Pad\u00f3. 2018. A named en- tity recognition shootout for German. In Proceed- ings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL 2018), Volume 2: Short Papers, pages 120-125, Melbourne.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Feature-rich part-ofspeech tagging with a cyclic dependency network",
"authors": [
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Yoram",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of HLT-NAACL 2003",
"volume": "",
"issue": "",
"pages": "252--259",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kristina Toutanova, Dan Klein, Christopher D. Man- ning, and Yoram Singer. 2003. Feature-rich part-of- speech tagging with a cyclic dependency network. In Proceedings of HLT-NAACL 2003, pages 252- 259, Edmonton.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Enriching the knowledge sources used in a maximum entropy part-of-speech tagger",
"authors": [
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Christopher",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of EMNLP/VLC-2000",
"volume": "",
"issue": "",
"pages": "63--70",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kristina Toutanova and Christopher D. Manning. 2000. Enriching the knowledge sources used in a maxi- mum entropy part-of-speech tagger. In Proceedings of EMNLP/VLC-2000, pages 63-70.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Bhojpuri",
"authors": [
{
"first": "K",
"middle": [],
"last": "Manindra",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Verma",
"suffix": ""
}
],
"year": 2003,
"venue": "The Indo-Aryan Languages",
"volume": "",
"issue": "",
"pages": "566--589",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manindra K. Verma. 2003a. Bhojpuri. In George Cardona and Dhanesh Jain, editors, The Indo-Aryan Languages, pages 566-589. Routledge, London.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "The Indo-Aryan Languages",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "547--565",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sheela Verma. 2003b. Magahi. In George Cardona and Dhanesh Jain, editors, The Indo-Aryan Languages, pages 547-565. Routledge, London.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"content": "<table/>",
"text": "Sizes of the training and test sets and of the tagsets.",
"type_str": "table",
"html": null,
"num": null
},
"TABREF4": {
"content": "<table><tr><td>model</td><td>accuracy</td></tr><tr><td>Magahi (Hindi embeddings)</td><td>88,97 (\u00b11,14)</td></tr><tr><td>Magahi (Bihari embeddings)</td><td>89,09 (\u00b11,00)</td></tr><tr><td>HDTB \u2192 Magahi (Hindi embeddings)</td><td>89,85 (\u00b10,99)</td></tr><tr><td>KMI-Mag \u2192 Magahi (Hindi embeddings)</td><td>90,70 (\u00b10,92)</td></tr><tr><td>Bhojpuri (Hindi embeddings)</td><td>90,78 (\u00b10,55)</td></tr><tr><td>Bhojpuri (Bihari embeddings)</td><td>90,80 (\u00b10,57)</td></tr><tr><td colspan=\"2\">KMI-Mag \u2192 Bhojpuri (Hindi embeddings) 91,23 (\u00b10,68)</td></tr></table>",
"text": "Magahi results for SoMeWeTa. We report the mean accuracies and 95% confidence intervals of a 10-fold cross-validation on the training data. The model that we submitted to the shared task is set in italics.",
"type_str": "table",
"html": null,
"num": null
},
"TABREF5": {
"content": "<table><tr><td>parameter</td><td>default value</td></tr></table>",
"text": "Settings and parameters with ranges for the training of the Stanford PoS Tagger for Magahi.",
"type_str": "table",
"html": null,
"num": null
},
"TABREF6": {
"content": "<table/>",
"text": "Settings and parameters with ranges for the training of the Stanford PoS Tagger for Bhojpuri.",
"type_str": "table",
"html": null,
"num": null
},
"TABREF7": {
"content": "<table><tr><td colspan=\"2\">rank submission</td><td>F1</td></tr><tr><td>1</td><td>Stanford</td><td>95</td></tr><tr><td>1</td><td colspan=\"2\">NITK-NLP_SUB1 95</td></tr><tr><td>2</td><td>SoMeWeTa</td><td>93</td></tr><tr><td>3</td><td>BiLSTM-CRF</td><td>92</td></tr><tr><td>4</td><td colspan=\"2\">NITK-NLP_SUB2 89</td></tr></table>",
"text": "Confusion Matrix for SoMeWeTa predicting Magahi tags on the test data. Absolute numbers are given for all cells; shade represents recall (on the diagonal) and false positive rate, respectively. Actual labels can be found on the abscissa, predicted ones on the ordinate.",
"type_str": "table",
"html": null,
"num": null
},
"TABREF8": {
"content": "<table><tr><td>: Results for Bhojpuri</td></tr><tr><td>and instead provide a quick summary for the Stan-</td></tr><tr><td>ford tagger: 14</td></tr></table>",
"text": "",
"type_str": "table",
"html": null,
"num": null
},
"TABREF10": {
"content": "<table/>",
"text": "Results for Magahi",
"type_str": "table",
"html": null,
"num": null
}
}
}
}