{
"paper_id": "Y12-1014",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:45:27.144938Z"
},
"title": "Indonesian Dependency Treebank: Annotation and Parsing",
"authors": [
{
"first": "Nathan",
"middle": [],
"last": "Green",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Charles University",
"location": {
"settlement": "Prague"
}
},
"email": "green@ufal.mff.cuni.cz"
},
{
"first": "Septina",
"middle": [
"Dian"
],
"last": "Larasati",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Charles University",
"location": {
"settlement": "Prague"
}
},
"email": "larasati@ufal.mff.cuni.cz"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We introduce and describe ongoing work in our Indonesian dependency treebank. We described characteristics of the source data as well as describe our annotation guidelines for creating the dependency structures. Reported within are the results from the start of the Indonesian dependency treebank. We also show ensemble dependency parsing and self training approaches applicable to under-resourced languages using our manually annotated dependency structures. We show that for an under-resourced language, the use of tuning data for a meta classifier is more effective than using it as additional training data for individual parsers. This meta-classifier creates an ensemble dependency parser and increases the dependency accuracy by 4.92% on average and 1.99% over the best individual models on average. As the data sizes grow for the the under-resourced language a meta classifier can easily adapt. To the best of our knowledge this is the first full implementation of a dependency parser for Indonesian. Using self-training in combination with our Ensemble SVM Parser we show aditional improvement. Using this parsing model we plan on expanding the size of the corpus by using a semi-supervised approach by applying the parser and correcting the errors, reducing the amount of annotation time needed.",
"pdf_parse": {
"paper_id": "Y12-1014",
"_pdf_hash": "",
"abstract": [
{
"text": "We introduce and describe ongoing work in our Indonesian dependency treebank. We described characteristics of the source data as well as describe our annotation guidelines for creating the dependency structures. Reported within are the results from the start of the Indonesian dependency treebank. We also show ensemble dependency parsing and self training approaches applicable to under-resourced languages using our manually annotated dependency structures. We show that for an under-resourced language, the use of tuning data for a meta classifier is more effective than using it as additional training data for individual parsers. This meta-classifier creates an ensemble dependency parser and increases the dependency accuracy by 4.92% on average and 1.99% over the best individual models on average. As the data sizes grow for the the under-resourced language a meta classifier can easily adapt. To the best of our knowledge this is the first full implementation of a dependency parser for Indonesian. Using self-training in combination with our Ensemble SVM Parser we show aditional improvement. Using this parsing model we plan on expanding the size of the corpus by using a semi-supervised approach by applying the parser and correcting the errors, reducing the amount of annotation time needed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Treebanks have been a major source for the advancement of many tools in the NLP pipeline from sentence alignment to dependency parsers to an end product, which is often machine translation. While useful for machine learning as well and linguistic analysis, these treebanks typically only exist for a handful of resource-rich languages. Treebanks tend to come in two linguistic forms, dependency based and constituency based each with their own pros and cons. Dependency treebanks have been made popular by treebanks such as the Prague dependency treebank (Hajic, 1998) and constituency treebanks by the Penn treebank (Marcus et al., 1993) . While some linguistic phenomena are better represented in one form instead of another, the two forms are generally able to be transformed into one another.",
"cite_spans": [
{
"start": 555,
"end": 568,
"text": "(Hajic, 1998)",
"ref_id": "BIBREF5"
},
{
"start": 617,
"end": 638,
"text": "(Marcus et al., 1993)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "While many of the world's 6,000+ languages could be considered under-resourced due to a limited number of native speakers and low overall population in their countries, Indonesia is the fourth most populous country in the world with over 23 million native and 215 million non-native Bahasa Indonesia speakers. The development of language resources, treebanks in particular, for Bahasa Indonesia will have an immediate effect for Indonesian NLP.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Further development of our Indonesian dependency treebank can affect part of speech taggers, named entity recognizers, and machine translation systems. All of these systems have technical benefits to the 238 million native and non-native Indonesian speakers ranging for spell checkers, improved information retrieval, to improved access to more of the Web due to better page translation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Some other NLP resources exist for Bahasa Indonesia as described in Section 2. While these are a nice start to language resources for Indonesian, dependency relations can have a positive effect on word reordering, long range dependencies, as well as anaphora resolution. Dependency relations have also been shown to be integral to deep syntactic transfer machine translation systems (\u017dabokrtsk\u00fd et al., 2008 ).",
"cite_spans": [
{
"start": 383,
"end": 407,
"text": "(\u017dabokrtsk\u00fd et al., 2008",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There was research done on developing a rule-base Indonesian constituency parser applying syntactic structure to Indonesian sentences. It uses a rulebased approach by defining the grammar using PC-PATR (Joice, 2002) . There was also research that applied the above constituency parser to create a probabilistic parser (Gusmita and Manurung, 2008) . To the best of our knowledge no dependency parser has been created and publicly released for Indonesian.",
"cite_spans": [
{
"start": 202,
"end": 215,
"text": "(Joice, 2002)",
"ref_id": "BIBREF7"
},
{
"start": 318,
"end": 346,
"text": "(Gusmita and Manurung, 2008)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Semi-supervised annotation has been shown to be a useful means to to increase the amount of annotated data in dependency parsing (Koo et al., 2008) , however typically for languages which already have plentiful annotated data such as Czech and English. Self-training was also shown to be useful in constituent parsing as means of seeing known tokens in new context (McClosky et al., 2008) . Our work differs in the fact that we examine the use of ensemble collaborative models' effect on the self-training loop as well as starting with a very reduced training set of 100 sentences. The use of model agreement features for our SVM classifier is useful in its approach since under-resourced languages will not need any additional analysis tools to create the classifier.",
"cite_spans": [
{
"start": 129,
"end": 147,
"text": "(Koo et al., 2008)",
"ref_id": "BIBREF8"
},
{
"start": 365,
"end": 388,
"text": "(McClosky et al., 2008)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Ensemble learning (Dietterich, 2000) has been used for a variety of machine learning tasks and recently has been applied to dependency parsing in various ways and with different levels of success. (Surdeanu and Manning, 2010; Haffari et al., 2011) showed a successful combination of parse trees through a linear combination of trees with various weighting formulations. Parser combination with dependency trees have been examined in terms of accuracy (Sagae and Lavie, 2006; Sagae and Tsujii, 2007; Zeman and\u017dabokrtsk\u00fd, 2005) . POS tags were used in parser combination in for combining a set of Malt Parser models with an SVM classifier with success, however we believe our work is novel in its use an SVM classifier solely on model agreements.",
"cite_spans": [
{
"start": 197,
"end": 225,
"text": "(Surdeanu and Manning, 2010;",
"ref_id": "BIBREF19"
},
{
"start": 226,
"end": 247,
"text": "Haffari et al., 2011)",
"ref_id": "BIBREF4"
},
{
"start": 451,
"end": 474,
"text": "(Sagae and Lavie, 2006;",
"ref_id": "BIBREF17"
},
{
"start": 475,
"end": 498,
"text": "Sagae and Tsujii, 2007;",
"ref_id": "BIBREF18"
},
{
"start": 499,
"end": 525,
"text": "Zeman and\u017dabokrtsk\u00fd, 2005)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The treebank that we use in this work is a collection of manually annotated Indonesian dependency trees. It consists of 100 Indonesian sentences with 2705 tokens and a vocabulary size of 1015 unique tokens. The sentences are taken from the IDENTIC corpus (Larasati, 2012). The raw version of the sentences originally were taken from the BPPT articles in economy from the PAN localization (PAN, 2010) project output. The treebank used Parts-Of-Speech tags (POS tags) provided by MorphInd (Larasati et al., 2011) . Since the MorphInd output is ambiguous, the tags are also disambiguated and corrected manually, including the unknown POS tag. The distribution of the POS tags can be seen in Table 1 .",
"cite_spans": [
{
"start": 478,
"end": 510,
"text": "MorphInd (Larasati et al., 2011)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 688,
"end": 695,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Data Description",
"sec_num": "3"
},
{
"text": "The annotation is done using the visual tree editor, TreD (Pajas, 2000) and stored in CoNLL format (Buchholz and Marsi, 2006) for compatibility with several dependency parsers and other NLP tools.",
"cite_spans": [
{
"start": 58,
"end": 71,
"text": "(Pajas, 2000)",
"ref_id": "BIBREF15"
},
{
"start": 99,
"end": 125,
"text": "(Buchholz and Marsi, 2006)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Description",
"sec_num": "3"
},
{
"text": "Currently the annotation provided in this treebank is the unlabeled relationship between the head and its dependents. We follow a general annotation guidelines as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Description",
"sec_num": "4"
},
{
"text": "\u2022 The main head node of the sentence is attached to the ROOT node.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Description",
"sec_num": "4"
},
{
"text": "\u2022 Similarly as the main head node, the sentence separator punctuation is also attached to the ROOT node.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Description",
"sec_num": "4"
},
{
"text": "\u2022 The Subordinate Conjunction (with POS tag 'S-') nodes are attached to its subordinating clause head nodes. The subordinating clause head nodes are attached to its main clause head nodes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Description",
"sec_num": "4"
},
{
"text": "\u2022 The Coordination Conjunctions (with POS tag 'H-') nodes, that connect between two phrases (using the conjunction or commas), are attached to the first phrase head node. The second phrase head nodes are attached to the conjunction node. It follows this manner when there are more than two phrases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Description",
"sec_num": "4"
},
{
"text": "\u2022 The Coordination Conjunctions (with POS tag 'H-') nodes, that connect between two clauses (using the conjunction or commas), are attached to the first clause head node. The second clause head nodes are attached to the conjunction node. It follows this manner when there are more than two clauses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Description",
"sec_num": "4"
},
{
"text": "\u2022 The prepositions nodes with the POS tag 'R-' are the head of Prepositional Phrases (PP).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Description",
"sec_num": "4"
},
{
"text": "\u2022 In Quantitative Numeral Phrases such as \"3 thousand\", 'thousand' node will be the head and '3' node attached to 'thousand' node.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Description",
"sec_num": "4"
},
{
"text": "In general, the trees have the verb of the main clause as the head of the sentence where the Subject and the Object are attached to it. In most cases, the most left noun tokens are the noun phrase head, since most of Indonesian noun phrases are constructed in Head-Modifier construction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Description",
"sec_num": "4"
},
{
"text": "Figure 1: Dependency tree example for the sentence \"He said that the rupiah stability protection is used so that there is no bad effect in economy.\" When dealing with small data sizes it is often not enough to show a simple accuracy increase. This increase can be very reliant on the training/tuning/testing data splits as well as the sampling of those sets. For this reason our experiments are conducted over 18 training/tuning/testing data split configurations which enumerates possible configurations for testing sizes of 5%,10%,20% and 30%. For each configuration we randomly sample without replacement the training/tuning/testing data and rerun the experiment 100 times, each time sampling new sets for training,tuning, and testing. These 1800 runs, each on different samples, allow us to better show the overall effect on the accuracy metric as well as the statistically significant changes as described in Section 5.1.5. Figure 2 shows this process flow for one run of this experiment.",
"cite_spans": [],
"ref_spans": [
{
"start": 928,
"end": 936,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Annotation Description",
"sec_num": "4"
},
{
"text": "Dependency parsing systems are often optimized for English or other major languages. This optimization, along with morphological complexities, leads other languages toward lower accuracy scores in many cases. The goal here is to show that while the corpus is not the same in size as most CoNLL data, a successful dependency parser can still be trained from the annotated data and provide semisupervised annotation to help increase the corpus size.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsers",
"sec_num": "5.1.2"
},
{
"text": "Transition-based parsing creates a dependency structure that is parameterized over the transitions used to create a dependency tree. This is closely related to shift-reduce constituency parsing algorithms. The benefit of transition-based parsing is the use of greedy algorithms which have a linear time complexity. However, due to the greedy algorithms, longer arc parses can cause error propagation across each transition (K\u00fcbler et al., 2009) . We make use of Malt Parser , which in the CoNLL shared tasks was often tied with the best performing systems.",
"cite_spans": [
{
"start": 423,
"end": 444,
"text": "(K\u00fcbler et al., 2009)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parsers",
"sec_num": "5.1.2"
},
{
"text": "For the experiments in this paper we only use Malt Parser, but we use different training parameters to create various parsing models. For Malt Parser we use a total of 7 model variations as shown in ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsers",
"sec_num": "5.1.2"
},
{
"text": "We train our SVM classifier using only model agreement features. Using our tuning set, for each correctly predicted dependency edge, we create N 2 features where N is the number of parsing models. We do this for each model which predicted the correct edge in the tuning data. So for N = 3 the first feature would be a 1 if model 1 and model 2 agreed, feature 2 would be a 1 if model 1 and model 3 agreed, and so on. This feature set is widely applicable to many languages since it does not use any additional linguistic tools. For each edge in the ensemble graph, we use our classifier to predict which model should be correct, by first creating the model agreement feature set for the current edge of the unknown test data. The SVM predicts which model should be correct and this model then decides to which head the current node is attached. At the end of all the tokens in a sentence, the graph may not be connected and will likely have cycles. Using a Perl implementation of minimum spanning tree, in which each edge has a uniform weight, we obtain a minimum spanning forest, where each component is then connected and cycles are eliminated in order to achieve a well formed dependency structure. Figure 3 gives a graphical representation of how the SVM decision and MST algorithm create a final Ensemble parse tree which is similar to the construction used in Green and\u017dabokrtsk\u00fd, 2012) . Future iterations of this process could use a multi-label SVM or weighted edges based on the parser's accuracy on tuning data.",
"cite_spans": [
{
"start": 1365,
"end": 1391,
"text": "Green and\u017dabokrtsk\u00fd, 2012)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1201,
"end": 1209,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Ensemble SVM System",
"sec_num": "5.1.3"
},
{
"text": "Since this is a relatively small treebank and in order to confirm that our experiments are not heavily reliant on one particular sample of data we try a variety of data splits. To test the effects of the training, tuning, and testing data we try 18 different data split configurations, each one being sampled 100 times. The data splits in Section 5.2 use the format trainingtuning-testing. So 70-20-10 means we used 70% of the Indonesian Treebank for training, 20% for tuning the SVM classifier, and 10% for evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Set Split Configurations",
"sec_num": "5.1.4"
},
{
"text": "Made a standard in the CoNLL shared tasks competition, two standard metrics for comparing dependency parsing systems are typically used. Labeled attachment score (LAS) and unlabeled attachment score (UAS). UAS studies the structure of a dependency tree and assesses how often the output has the correct head and dependency arcs. In addition to the structure score in UAS, LAS also measures the accuracy of the dependency labels on each arc (Buchholz and Marsi, 2006 ). Since we are mainly concerned with the structure of the ensemble parse, we report only UAS scores in this paper.",
"cite_spans": [
{
"start": 440,
"end": 465,
"text": "(Buchholz and Marsi, 2006",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5.1.5"
},
{
"text": "To test statistical significance we use Wilcoxon paired signed-rank test. For each data split configuration we have 100 iterations of the experiment. Each model is compared against the same samples so a paired test is appropriate in this case. We report statistical significance values for p < 0.01. For each of the data splits, Table 3 shows the percent increase in our SVM system over both the average of the 7 individual models and over the best individual model. As the Table 3 shows, we obtain above average UAS scores in every data split. The increase is statistical significant in all data splits except one, the 90-5-5 split. This seems to be logical since this data split has the least difference in training data between systems, with only 5% tuning data. Our highest average UAS score was with the 70-20-10 split with a UAS of 62.48%. The use of 20% tuning data is of interest since it was significantly better than models with 10%-25% more training data as seen in Figure 4 . This additional data spent for tuning appears to be worth the cost.",
"cite_spans": [],
"ref_spans": [
{
"start": 329,
"end": 336,
"text": "Table 3",
"ref_id": null
},
{
"start": 474,
"end": 481,
"text": "Table 3",
"ref_id": null
},
{
"start": 977,
"end": 985,
"text": "Figure 4",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5.1.5"
},
{
"text": "The selection of the test data seems to have caused a difference in our results. While all our ensemble SVM parsings system have better UAS scores, it is a lower increase when we only use 5% for testing. Which in our treebank means we are only using 5 sentences randomly selected per experiment. This does not seem to be enough to judge the improvement. Table 3 : Average increases and decreases in UAS score for different Training-Tuning-Test samples. The average was calculated over all 7 models while the best was selected for each data split. Each experiment was sampled 100 times and Wilcoxon Statistical Significance was calculated for our SVM model's increase/decrease over each individual model. Y = p < 0.01 and N = p \u2265 0.01 for all models in the data split Figure 5 : Process Flow for one run of our self-training system. There is one alternative scenario in which the system either does self-training with each N parser or with the ensemble SVM parser. These constitute two different experiments. For all experiments i=10 and N =7",
"cite_spans": [],
"ref_spans": [
{
"start": 354,
"end": 361,
"text": "Table 3",
"ref_id": null
},
{
"start": 767,
"end": 775,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5.2"
},
{
"text": "6 Self-training",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": null
},
{
"text": "The following methodology was run 12 independent times. Each time new testing/tuning/and training datasets were randomly selected without replacement. In each iteration the SVM classifier and dependency models were retrained using self-training. Also for each of the 12 experiments, new random self-training datasets were selected from the larger corpus. The results in the next section are averaged amongst these 12 independent runs. Figure 5 shows this process flow for one run of this experiment. The data for self-training is also taken from IDENTIC and it consists of 45,000 sentences. The data does not have any dependency relation information but it is enriched with POS tags. It is processed with the same morphology tools as the training data described in section 3 but without the manual disambiguation and correction. This data and its annotation information are available on the IDENTIC homepage 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 435,
"end": 443,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Methodology",
"sec_num": "6.1"
},
{
"text": "For self-training we present two scenarios. First, all parsing models are retrained with their own pre-1 http://ufal.mff.cuni.cz/ larasati/identic/ dicted output. Second, all parsing models are retrained with the output of our SVM ensemble parser. Self-training in both cases is done of 10 iterations of 20 sentences. Sentences are chosen at random from unannotated data. This allows us to examine selftraining to a training data size of twice the original set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "6.1"
},
{
"text": "The next section examines the differences between these two approaches and the effect on the overall parse. Figure 6 : We can see that the self-trained Malt Parser 2Planar model that is trained with the ensemble output consistently outperforms the self-trained model that uses its own output. Results are graphed over the 10 selftraining iterations As can be seen in Figure 6 , the base models did better when trained with additional data that was parsed by our SVM ensemble system. The higher UAS accuracy seems to of had a better effect then receiving dependency structures of a similar nature to the current model. We show the 2Planar model in Figure 6 but this was the case for each of the 7 individual models. On an interesting note, the SVM system had least improvement, 0.60%, when the component base models were trained on its own output. This seems warranted as other parser combination papers have shown that ensemble systems prefer models which differ more so that a clearer decision can be made Green and\u017dabokrtsk\u00fd, 2012) . The improvements when self-training on our SVM output over the individual parsers' output can be seen in Table 3 . Again these are averages over 12 runs of the system, each run containing 10 self-training loops of 20 additional 143 sentences.",
"cite_spans": [
{
"start": 1007,
"end": 1033,
"text": "Green and\u017dabokrtsk\u00fd, 2012)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 108,
"end": 116,
"text": "Figure 6",
"ref_id": null
},
{
"start": 367,
"end": 375,
"text": "Figure 6",
"ref_id": null
},
{
"start": 647,
"end": 655,
"text": "Figure 6",
"ref_id": null
},
{
"start": 1141,
"end": 1148,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Methodology",
"sec_num": "6.1"
},
{
"text": "% Improvement % 2planar",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "1.10% nivreeager 0.40% nivrestandard 1.62% planar 0.87% stackeager 2.28% stacklazy 2.20% stackproj 1.95% svm 0.60% Table 4 : The % Improvement of all our parsing models including our ensemble svm algorithm over 12 complete iterations of the experiment.",
"cite_spans": [],
"ref_spans": [
{
"start": 115,
"end": 122,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "We have shown a successful implementation of self-training for dependency parsing on an underresourced language. Self-training in order to improve our parsing accuracy can be used to help semisupervised annotation of additional data. We show this for an initial data set of 100 sentences and an additional self-trained data set of 200 sentences. We introduce and show a collaborative SVM classifier that creates an ensemble parse tree from the predicted annotations and improves individual accuracy on average of 4.92%. This additional accuracy can release some of the burden on annotators for under-resourced language annotation who would use a dependency parser as a pre-annotation tool. Using these semi-supervised annotation techniques should be applicable to many languages since the SVM classifier is essentially blind to the language and only considers the models' agreement.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "The treebank is the first of its kind for the Indonesian language. Additionally all sentences and annotations are being made available publicly online. We have described the beginnings of the Indonesian dependency treebank. Characteristics of the sentences and dependency structure have been described.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
}
],
"back_matter": [
{
"text": "The research leading to these results has received funding from the European Commission's 7th Framework Program under grant agreement n \u2022 238405 (CLARA), by the grant LC536 Centrum Komputa\u010dn\u00ed Lingvistiky of the Czech Ministry of Education, and this work uses language resources developed and/or stored and/or distributed by the LINDAT-Clarin project of the Ministry of Education of the Czech Republic (project LM2010013).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": "8"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "CoNLL-X shared task on multilingual dependency parsing",
"authors": [
{
"first": "Sabine",
"middle": [],
"last": "Buchholz",
"suffix": ""
},
{
"first": "Erwin",
"middle": [],
"last": "Marsi",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Tenth Conference on Computational Natural Language Learning, CoNLL-X '06",
"volume": "",
"issue": "",
"pages": "149--164",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sabine Buchholz and Erwin Marsi. 2006. CoNLL- X shared task on multilingual dependency parsing. In Proceedings of the Tenth Conference on Compu- tational Natural Language Learning, CoNLL-X '06, pages 149-164, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Ensemble methods in machine learning",
"authors": [
{
"first": "G",
"middle": [],
"last": "Thomas",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dietterich",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the First International Workshop on Multiple Classifier Systems, MCS '00",
"volume": "",
"issue": "",
"pages": "1--15",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas G. Dietterich. 2000. Ensemble methods in ma- chine learning. In Proceedings of the First Interna- tional Workshop on Multiple Classifier Systems, MCS '00, pages 1-15, London, UK. Springer-Verlag.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Hybrid Combination of Constituency and Dependency Trees into an Ensemble Dependency Parser",
"authors": [
{
"first": "Nathan",
"middle": [],
"last": "Green",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zden\u011bk\u017eabokrtsk\u00fd",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Workshop on Innovative Hybrid Approaches to the Processing of Textual Data",
"volume": "",
"issue": "",
"pages": "19--26",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nathan Green and Zden\u011bk\u017dabokrtsk\u00fd. 2012. Hybrid Combination of Constituency and Dependency Trees into an Ensemble Dependency Parser. In Proceedings of the Workshop on Innovative Hybrid Approaches to the Processing of Textual Data, pages 19-26, Avignon, France, April. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Some initial experiments with indonesian probabilistic parsing",
"authors": [
{
"first": "R",
"middle": [
"H"
],
"last": "Gusmita",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Manurung",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 2nd International MALINDO Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R.H. Gusmita and R. Manurung. 2008. Some ini- tial experiments with indonesian probabilistic parsing. In Proceedings of the 2nd International MALINDO Workshop.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "An ensemble model that combines syntactic and semantic clustering for discriminative dependency parsing",
"authors": [
{
"first": "Gholamreza",
"middle": [],
"last": "Haffari",
"suffix": ""
},
{
"first": "Marzieh",
"middle": [],
"last": "Razavi",
"suffix": ""
},
{
"first": "Anoop",
"middle": [],
"last": "Sarkar",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "710--714",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gholamreza Haffari, Marzieh Razavi, and Anoop Sarkar. 2011. An ensemble model that combines syntactic and semantic clustering for discriminative dependency parsing. In Proceedings of the 49th Annual Meet- ing of the Association for Computational Linguistics: Human Language Technologies, pages 710-714, Port- land, Oregon, USA, June. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Building a syntactically annotated corpus: The prague dependency treebank. Issues of valency and meaning",
"authors": [],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "106--132",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jan Hajic. 1998. Building a syntactically annotated cor- pus: The prague dependency treebank. Issues of va- lency and meaning, pages 106-132.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Single Malt or Blended? A Study in Multilingual Parser Optimization",
"authors": [
{
"first": "Johan",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Jens",
"middle": [],
"last": "Nilsson",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "G\u00fclsen",
"middle": [],
"last": "Eryigit",
"suffix": ""
},
{
"first": "Be\u00e1ta",
"middle": [],
"last": "Megyesi",
"suffix": ""
},
{
"first": "Mattias",
"middle": [],
"last": "Nilsson",
"suffix": ""
},
{
"first": "Markus",
"middle": [],
"last": "Saers",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the CoNLL Shared Task Session of EMNLP-CoNLL",
"volume": "",
"issue": "",
"pages": "933--939",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Johan Hall, Jens Nilsson, Joakim Nivre, G\u00fclsen Eryigit, Be\u00e1ta Megyesi, Mattias Nilsson, and Markus Saers. 2007. Single Malt or Blended? A Study in Mul- tilingual Parser Optimization. In Proceedings of the CoNLL Shared Task Session of EMNLP-CoNLL 2007, pages 933-939.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Pengembangan lanjut pengurai struktur kalimat bahasa indonesia yang menggunakan constraint-based formalism",
"authors": [
{
"first": "Joice",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joice. 2002. Pengembangan lanjut pengurai struk- tur kalimat bahasa indonesia yang menggunakan constraint-based formalism. undergraduate thesis. Master's thesis, Faculty of Computer Science, Univer- sity of Indonesia.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Simple semi-supervised dependency parsing",
"authors": [
{
"first": "Terry",
"middle": [],
"last": "Koo",
"suffix": ""
},
{
"first": "Xavier",
"middle": [],
"last": "Carreras",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of ACL-08: HLT",
"volume": "",
"issue": "",
"pages": "595--603",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Terry Koo, Xavier Carreras, and Michael Collins. 2008. Simple semi-supervised dependency parsing. In Pro- ceedings of ACL-08: HLT, pages 595-603, Columbus, Ohio, June. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Dependency parsing. Synthesis lectures on human language technologies",
"authors": [
{
"first": "Sandra",
"middle": [],
"last": "K\u00fcbler",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sandra K\u00fcbler, Ryan McDonald, and Joakim Nivre. 2009. Dependency parsing. Synthesis lectures on hu- man language technologies. Morgan & Claypool, US.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Indonesian morphology tool (morphind): Towards an indonesian corpus. Systems and Frameworks for Computational Morphology",
"authors": [
{
"first": "Vladislav",
"middle": [],
"last": "Septina Dian Larasati",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Kubo\u0148",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zeman",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "119--129",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Septina Dian Larasati, Vladislav Kubo\u0148, and Dan Zeman. 2011. Indonesian morphology tool (morphind): To- wards an indonesian corpus. Systems and Frameworks for Computational Morphology, pages 119-129.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Identic corpus:morphologically enriched indonesian-english parallel corpus",
"authors": [
{
"first": "Larasati",
"middle": [],
"last": "Septina Dian",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Septina Dian Larasati. 2012. Identic cor- pus:morphologically enriched indonesian-english parallel corpus.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Building a large annotated corpus of english: the Penn Treebank",
"authors": [
{
"first": "Mitchell",
"middle": [
"P"
],
"last": "Marcus",
"suffix": ""
},
{
"first": "Mary",
"middle": [
"Ann"
],
"last": "Marcinkiewicz",
"suffix": ""
},
{
"first": "Beatrice",
"middle": [],
"last": "Santorini",
"suffix": ""
}
],
"year": 1993,
"venue": "Comput. Linguist",
"volume": "19",
"issue": "",
"pages": "313--330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beat- rice Santorini. 1993. Building a large annotated cor- pus of english: the Penn Treebank. Comput. Linguist., 19:313-330, June.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "When is self-training effective for parsing?",
"authors": [
{
"first": "David",
"middle": [],
"last": "Mcclosky",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 22nd International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "561--568",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David McClosky, Eugene Charniak, and Mark Johnson. 2008. When is self-training effective for parsing? In Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008), pages 561- 568, Manchester, UK, August. Coling 2008 Organiz- ing Committee.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "MaltParser: A languageindependent system for data-driven dependency parsing",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Johan",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Jens",
"middle": [],
"last": "Nilsson",
"suffix": ""
},
{
"first": "Atanas",
"middle": [],
"last": "Chanev",
"suffix": ""
},
{
"first": "Gulsen",
"middle": [],
"last": "Eryigit",
"suffix": ""
},
{
"first": "Sandra",
"middle": [],
"last": "K\u00fcbler",
"suffix": ""
},
{
"first": "Svetoslav",
"middle": [],
"last": "Marinov",
"suffix": ""
},
{
"first": "Erwin",
"middle": [],
"last": "Marsi",
"suffix": ""
}
],
"year": 2007,
"venue": "Natural Language Engineering",
"volume": "13",
"issue": "2",
"pages": "95--135",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre, Johan Hall, Jens Nilsson, Atanas Chanev, Gulsen Eryigit, Sandra K\u00fcbler, Svetoslav Marinov, and Erwin Marsi. 2007. MaltParser: A language- independent system for data-driven dependency pars- ing. Natural Language Engineering, 13(2):95-135.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Tree editor tred, prague dependency treebank",
"authors": [
{
"first": "Petr",
"middle": [],
"last": "Pajas",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Petr Pajas. 2000. Tree editor tred, prague depen- dency treebank, charles university, prague. See URL http://ufal. mff. cuni. cz/\u02dcpajas/tred.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Parser combination by reparsing",
"authors": [
{
"first": "Kenji",
"middle": [],
"last": "Sagae",
"suffix": ""
},
{
"first": "Alon",
"middle": [],
"last": "Lavie",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Human Language Technology Conference of the NAACL",
"volume": "",
"issue": "",
"pages": "129--132",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenji Sagae and Alon Lavie. 2006. Parser combina- tion by reparsing. In Proceedings of the Human Lan- guage Technology Conference of the NAACL, Com- panion Volume: Short Papers, pages 129-132, New York City, USA, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Dependency parsing and domain adaptation with LR models and parser ensembles",
"authors": [
{
"first": "Kenji",
"middle": [],
"last": "Sagae",
"suffix": ""
},
{
"first": "Jun'ichi",
"middle": [],
"last": "Tsujii",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the CoNLL Shared Task Session of EMNLP-CoNLL",
"volume": "",
"issue": "",
"pages": "1044--1050",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenji Sagae and Jun'ichi Tsujii. 2007. Dependency pars- ing and domain adaptation with LR models and parser ensembles. In Proceedings of the CoNLL Shared Task Session of EMNLP-CoNLL 2007, pages 1044-1050, Prague, Czech Republic, June. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Ensemble models for dependency parsing: cheap and good?",
"authors": [
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D."
],
"last": "Manning",
"suffix": ""
}
],
"year": 2010,
"venue": "Human Language Technologies: The",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mihai Surdeanu and Christopher D. Manning. 2010. En- semble models for dependency parsing: cheap and good? In Human Language Technologies: The 2010",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Annual Conference of the North American Chapter of the Association for Computational Linguistics, HLT '10",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "649--652",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual Conference of the North American Chapter of the Association for Computational Linguistics, HLT '10, pages 649-652, Stroudsburg, PA, USA. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "TectoMT: Highly Modular MT System with Tectogrammatics Used as Transfer Layer",
"authors": [
{
"first": "Jan",
"middle": [],
"last": "Zden\u011bk\u017eabokrtsk\u00fd",
"suffix": ""
},
{
"first": "Petr",
"middle": [],
"last": "Pt\u00e1\u010dek",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Pajas",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 3rd Workshop on Statistical Machine Translation, ACL",
"volume": "",
"issue": "",
"pages": "167--170",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zden\u011bk\u017dabokrtsk\u00fd, Jan Pt\u00e1\u010dek, and Petr Pajas. 2008. TectoMT: Highly Modular MT System with Tec- togrammatics Used as Transfer Layer. In Proceedings of the 3rd Workshop on Statistical Machine Transla- tion, ACL, pages 167-170.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Improving parsing accuracy by combining diverse dependency parsers",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Zeman",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zden\u011bk\u017eabokrtsk\u00fd",
"suffix": ""
}
],
"year": 2005,
"venue": "In: Proceedings of the 9th International Workshop on Parsing Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Zeman and Zden\u011bk\u017dabokrtsk\u00fd. 2005. Improving parsing accuracy by combining diverse dependency parsers. In In: Proceedings of the 9th International Workshop on Parsing Technologies.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"text": "Process Flow for one run of our SVM Ensemble system. This Process in its entirety was run 100 times for each of the 18 data set splits.",
"num": null
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"text": "General flow to create an Ensemble parse tree",
"num": null
},
"FIGREF2": {
"type_str": "figure",
"uris": null,
"text": "Surface plot of the UAS score for the tuning and training data split.",
"num": null
},
"TABREF1": {
"type_str": "table",
"html": null,
"content": "<table><tr><td>5 Ensemble SVM Dependency Parsing</td></tr><tr><td>5.1 Methodology</td></tr><tr><td>5.1.1 Process Flow</td></tr></table>",
"num": null,
"text": "The distribution of the Part-Of-Speech tag occurrence."
},
"TABREF2": {
"type_str": "table",
"html": null,
"content": "<table><tr><td colspan=\"2\">Training Parameter Model Description</td></tr><tr><td>nivreeager</td><td>Nivre arc-eager</td></tr><tr><td>nivrestandard</td><td>Nivre arc-standard</td></tr><tr><td>stackproj</td><td>Stack projective</td></tr><tr><td>stackeager</td><td>Stack eager</td></tr><tr><td>stacklazy</td><td>Stack lazy</td></tr><tr><td>planar</td><td>Planar eager</td></tr><tr><td>2planar</td><td>2-Planar eager</td></tr></table>",
"num": null,
"text": ""
},
"TABREF3": {
"type_str": "table",
"html": null,
"content": "<table/>",
"num": null,
"text": ""
}
}
}
}