{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:26:50.729452Z"
},
"title": "Evaluating a Dependency Parser on DeReKo",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Fankhauser",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IDS Mannheim Germany",
"location": {
"settlement": "Germany",
"country": "Germany"
}
},
"email": "fankhauser@ids-mannheim.de"
},
{
"first": "Bich-Ngoc",
"middle": [],
"last": "Do",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IDS Mannheim Germany",
"location": {
"settlement": "Germany",
"country": "Germany"
}
},
"email": "do@cl.uni-heidelberg.de"
},
{
"first": "Marc",
"middle": [],
"last": "Kupietz",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IDS Mannheim Germany",
"location": {
"settlement": "Germany",
"country": "Germany"
}
},
"email": "kupietz@ids-mannheim.de"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We evaluate a graph-based dependency parser on DeReKo, a large corpus of contemporary German. The dependency parser is trained on the German dataset from the SPMRL 2014 Shared Task which contains text from the news domain, whereas DeReKo also covers other domains including fiction, science, and technology. To avoid the need for costly manual annotation of the corpus, we use the parser's probability estimates for unlabeled and labeled attachment as main evaluation criterion. We show that these probability estimates are highly correlated with the actual attachment scores on a manually annotated test set. On this basis, we compare estimated parsing scores for the individual domains in DeReKo, and show that the scores decrease with increasing distance of a domain to the training corpus.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "We evaluate a graph-based dependency parser on DeReKo, a large corpus of contemporary German. The dependency parser is trained on the German dataset from the SPMRL 2014 Shared Task which contains text from the news domain, whereas DeReKo also covers other domains including fiction, science, and technology. To avoid the need for costly manual annotation of the corpus, we use the parser's probability estimates for unlabeled and labeled attachment as main evaluation criterion. We show that these probability estimates are highly correlated with the actual attachment scores on a manually annotated test set. On this basis, we compare estimated parsing scores for the individual domains in DeReKo, and show that the scores decrease with increasing distance of a domain to the training corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The Leibniz Institute for the German Language (IDS) has been building up the German Reference Corpus DeReKo (Kupietz et al., 2010) since its foundation in the mid-1960s and maintains it continuously. Since 2004, two new releases per year have been published. These are made available to the German linguistic community via the corpus analysis platforms COSMAS II (Bodmer, 2005) and KorAP (Ba\u0144ski et al., 2013) , which allows the query and display of dependency annotations. DeReKo covers a broad spectrum of topics and text types (Kupietz et al., 2018) . The latest release DeReKo 2020-I (Leibniz-Institut f\u00fcr Deutsche Sprache, 2020) contains 46.9 billion words. The number of registered users is about 45,000.",
"cite_spans": [
{
"start": 108,
"end": 130,
"text": "(Kupietz et al., 2010)",
"ref_id": "BIBREF6"
},
{
"start": 363,
"end": 377,
"text": "(Bodmer, 2005)",
"ref_id": "BIBREF2"
},
{
"start": 388,
"end": 409,
"text": "(Ba\u0144ski et al., 2013)",
"ref_id": "BIBREF0"
},
{
"start": 530,
"end": 552,
"text": "(Kupietz et al., 2018)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background and Aims",
"sec_num": "1."
},
{
"text": "DeReKo also features many linguistic annotation layers, including 4 different morphosyntactic annotations as well as one constituency and dependency annotation. The only dependency annotation is currently provided by the Maltparser (Nivre et al., 2006) , however, based on a different dependency scheme. One of DeReKo's design principles is to distinguish between observations and interpretations. Accordingly (automatic) linguistic annotations are systematically handled as theory-dependent and potentially error-prone interpretations. DeReKo's approach to make them usable for linguistic applications is to offer several alternatives, ideally independent annotations (Belica et al., 2011) on all levels. With KorAP, users can then use the degree of agreement between alternative annotations to get an idea of the accuracy they can expect for specific queries and query combinations. By using disjunctive or conjunctive queries on annotation alternatives, users can, in addition, try to maximise recall or precision, respectively (Kupietz et al., 2017) . With this approach, the direct comparison of the average accuracy of two annotation tools or models does not play a decisive role, since normally one would add both variants anyway. However, since DeReKo is first of all very large and secondly permanently extended and improved, it is a prerequisite that an annotation tool is sufficiently performant to be applicable to DeReKo or to additional corpus text within reasonable time. This is not always the case, especially with syntactic annotations. Given this background, the evaluation criteria for dependency annotations might differ from those in other applications. Important factors are above all: 1) sufficient performance and stability of the annotation tool; 2) independence from existing annotations; 3) at least selective improvements over existing annotations 4) Adaptability to domains outside the training data",
"cite_spans": [
{
"start": 232,
"end": 252,
"text": "(Nivre et al., 2006)",
"ref_id": "BIBREF12"
},
{
"start": 669,
"end": 690,
"text": "(Belica et al., 2011)",
"ref_id": "BIBREF1"
},
{
"start": 1031,
"end": 1053,
"text": "(Kupietz et al., 2017)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Linguistic Annotations in DeReKo",
"sec_num": null
},
{
"text": "Parser The evaluated parser is a re-implementation of the graph-based dependency parser from Dozat and Manning (2017). The parser employs several layers of bidirectional Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) units to encode the words in a sentence. These representations are then used to train two biaffine classifiers, one to predict the head of a word and the other to predict the dependency label between two words. At prediction time, the dependency head and label for each word is selected as the word and label with the highest estimates given by the classifiers. The parser is available on Github (Do, 2019) .",
"cite_spans": [
{
"start": 200,
"end": 234,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF5"
},
{
"start": 631,
"end": 641,
"text": "(Do, 2019)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parser and Corpora",
"sec_num": "2."
},
{
"text": "We train the parser on the German dataset of the SPMRL 2014 Shared Task (Seddah et al., 2014) with the hyperparameters recommended by the authors. The dataset contains 40,000 sentences (760000 tokens) in the training set and 5,000 sentences (81700, 97000 tokens) for both development and testing. We use the predicted POS tags provided by the shared task organizers. For some evaluations we also use external word embeddings (see Section 3.) trained on DeReKo.",
"cite_spans": [
{
"start": 72,
"end": 93,
"text": "(Seddah et al., 2014)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training data",
"sec_num": null
},
{
"text": "Evaluation data As evaluation data we use a sample of release 2019-I (Leibniz-Institut f\u00fcr Deutsche Sprache, 2019) of the German Reference Corpus DeReKo with 3670 Mio tokens from 11 domains. For a breakdown see Table 3 . The corpus has been tokenized and part-of-speech tagged by the treetagger (Schmid, 1994) . Parsing the corpus on a TESLA P4 GPU (8 GB) takes about 100 hours. For comparison, parsing with Malt 1.9.2 (liblinear) takes 34 wall-clock hours (38 CPU-hours) on the same machine equipped with enough RAM and Intel Xeon Gold 6148 CPUs (at 2.40 GHz), when the corpus is processed sequentially. This means that parsing with the malt parser is much more performant, especially since it can be distributed more easily to several existing computers and cores. On the other hand, parsing with the biaffine LSTM parser is at least sufficiently performant in the case of DeReKo. By using an additional GPU, DeReKo could be parsed within less than 4 weeks.",
"cite_spans": [
{
"start": 295,
"end": 309,
"text": "(Schmid, 1994)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 211,
"end": 218,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Training data",
"sec_num": null
},
{
"text": "As basic measures for parsing accuracy we use unlabeled and labeled attachment scores, UAS and LAS. UAS gives the percentage of dependency relations with the correct head and dependent, and LAS the percentage of correctly attached and labelled dependencies. In addition, we also look at the attachment estimates given by the two biaffine classifiers of the parser (see Equations 2 and 3 in Dozat and Manning (2017)). The estimates for the head of a dependency (unlabeled attachment estimate, UAE) and for its label (independent labeled attachment estimate, ILAE) are independent. Thus we calculate the labeled attachment estimate LAE as the product of UAE and ILAE. Table 1 compares the attachment scores and estimates for different embeddings on the test set. For Spmrl embeddings we have experimented with embedding dimensions 100 and 200, for DeReKo embeddings we have used 200 dimensions throughout. The internal Spmrl embeddings are trained as part of the parser training process, the DeReKo embeddings have been trained using the structured skip gram approach introduced in (Ling et al., 2015) on the complete DeReKo-2017-I corpus (Institut f\u00fcr Deutsche Sprache, 2017) consisting of over 30 billion tokens. DeReKo1 uses the embeddings for the most frequent 100.000 words, DeReKo2 and DeReKo5 the most frequent 200.000 and most frequent 500.000 words respectively. The best overall scores are achieved with DeReKo2 leading to an improvement of about 0.5% in UAS and 0.8% in LAS w.r.t. the baseline of Spmrl without external embeddings. Taking into account a larger vocabulary (DeReKo5) does not improve the scores, nor does concatenating the internal embeddings of the parser with the DeReKo embeddings DRK2+Spmrl.",
"cite_spans": [],
"ref_spans": [
{
"start": 666,
"end": 673,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Overall Accuracy",
"sec_num": "3."
},
{
"text": "Comparing the scores with the parsers' estimates along varying embeddings also shows that they are highly correlated with the spearman rank correlation coefficient \u03c1 = 0.89 between UAS and UAE, and \u03c1 = 0.94 between LAS and LAE. All further evaluations use the model with the best scores DeReKo2. Figure 1 plots the attachment scores against the attachment estimates between 75% and 100% in bins of 1%, i.e., the value at 99% estimate is the average score of all attachments with an estimate between 99% and 100%, and so on, and estimates smaller than 75% are bundled together with an average score of about 50%. Blue boxes stand for UAS and red circles for the LAS. Also from this perspective, the estimates strongly correlate with the scores. However, the estimates are typically overly confident. For the about 70% (63%) of attachments with an unlabeled (labeled) estimate \u2265 99% we get 99.79% UAS and 99.84% LAS. For the about 15% attachments with estimates between 98% and 99%, UAS and LAS are at about 96%. For lower estimates the difference between estimate and actual score increases. Nevertheless, the estimates predict the actual scores rather well, with Spearman's \u03c1 = 0.94 for UAE vs. UAS, and \u03c1 = 0.99 for LAE vs. LAS. Table 2 breaks down scores and estimates by dependency label 1 . Prob gives the relative frequency of a dependency label in percent, Uerr gives the percentage of overall error for unlabeled attachment, Lerr the percentage for labeled attachment, Rec the recall and Prec the precision for labeled attachment only, not taking into account the correctness of head and dependent. In terms of individual scores, relatively rare dependencies such as Parataxes or Appositions perform worst. However, the frequency Prob of dependencies does not seem to have a strong influence on score, \u03c1 = \u22120.05 for UAS vs. Prob, and \u03c1 = 0.42 for LAS vs. Prob. 
In terms of contribution to the overall error, Modifier (MO), Modifier of NP to the right (MNR), and Punctuation (X..) account for more than 50%. MO is often mislabelled as MNR or Object Preposition (OP) and vice versa, which typically also assigns the head incorrectly, as evident by the rather low UAS of 88%. Punctuation is virtually never confused with other labels, its score of 91% is almost exclusively due to incorrect head or dependent attachments. In terms of recall, rare dependencies such as Vocative (VO), Reported Speech (RS), and Object Genitive (OG) stand out, e.g. only 1 out of 15 occurrences of Vocative is correctly labeled, and less than half of RS and OG. Also, rare dependencies tend to depict low precision. Comparing scores with estimates broken down by dependency label again reveals a rather strong correlation of \u03c1 = 0.89 for unlabeled and \u03c1 = 0.75 for labeled attachments.",
"cite_spans": [],
"ref_spans": [
{
"start": 296,
"end": 304,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 1230,
"end": 1237,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Scores vs. Estimates",
"sec_num": null
},
{
"text": "Having established attachment estimates as a fairly reliable predictor for attachment scores, we can derive estimates for Dereko for which we do not have any test data. Table 3 breaks down estimates by domain, sorted by UAE. It can be seen that domains that are close to the news domain, for which the parser has been trained, such as politics, finance, and health achieve the best overall estimates. In contrast, domains, such as fiction, culture, and sports depict significantly lower estimates. Table 3 : Attachment estimates by domain One way to measure the distance between domains w.r.t. to dependencies is to compare their distributions over dependency labels. JS_dep gives the Jensen-Shannon Divergence ( * 100) between the dependency distributions of the individual domains in DeReKo and the Spmrl training corpus. The closest is politics, and the most distant is fiction. Indeed, we can observe a strong negative correlation between UAE and JS_dep of \u22120.92 (Pearson) and LAE and JS_dep of \u22120.84. These findings are corroborated by the likewise fairly strong negative correlations between attachment estimates and JS_pos the JS divergence measured on the partof-speech distributions; \u22120.48 for UAE and \u22120.84 for LAE.",
"cite_spans": [],
"ref_spans": [
{
"start": 169,
"end": 176,
"text": "Table 3",
"ref_id": null
},
{
"start": 498,
"end": 505,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Domain Dependence",
"sec_num": "5."
},
{
"text": "We have presented an evaluation of a graph-based dependency parser on a large corpus of contemporary German for which no manually labelled test set is available. To this end, we have analyzed the correlation between actual attachment scores measured on the SPMRL test set with the parser's attachment estimates, and shown that they are highly correlated along variations in pretrained word embeddings (Table 1), as well as along the different kinds of dependencies (Table 2) . On this basis, we have shown that the parser's attachment estimates are consistently domain dependent, with estimates varying up to 3% depending on distance of the domain to the training set. This suggests that it may be fruitful to experiment with domain adaptation techniques such as (Yu et al., 2015) in order to improve scores. For future work, we plan to systematically compare scores and estimates with the Malt parser. Depending on the results, we plan to apply the parser to the entire DeReKo in one of the upcoming releases and make the new dependency annotation layer available to German linguistics for research and analysis via KorAP. ",
"cite_spans": [
{
"start": 763,
"end": 780,
"text": "(Yu et al., 2015)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 465,
"end": 474,
"text": "(Table 2)",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Summary",
"sec_num": "6."
},
{
"text": "The SPMRL 2014 Shared Task for German uses the dependency scheme adopted bySeeker and Kuhn (2012)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "KorAP: the new corpus analysis platform at IDS Mannheim",
"authors": [
{
"first": "P",
"middle": [],
"last": "Ba\u0144ski",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Bingel",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Diewald",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Frick",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Hanl",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Kupietz",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Pezik",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Schnober",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Witt",
"suffix": ""
}
],
"year": 2013,
"venue": "Human Language Technologies as a Challenge for Computer Science and Linguistics. Proceedings of the 6th Language and Technology Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ba\u0144ski, P., Bingel, J., Diewald, N., Frick, E., Hanl, M., Kupietz, M., Pezik, P., Schnober, C., and Witt, A. (2013). KorAP: the new corpus analysis platform at IDS Mannheim. In Vetulani, Z. and Uszkoreit, H., editors, Human Language Technologies as a Challenge for Com- puter Science and Linguistics. Proceedings of the 6th Language and Technology Conference, Pozna\u0144. Fundacja Uniwersytetu im. A. Mickiewicza.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The morphosyntactic annotation of DeReKo: Interpretation, opportunities and pitfalls",
"authors": [
{
"first": "C",
"middle": [],
"last": "Belica",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Kupietz",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "L\u00fcngen",
"suffix": ""
},
{
"first": "A",
"middle": [
"; M"
],
"last": "Witt",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Kubczak",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Mair",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "\u0160ticha",
"suffix": ""
},
{
"first": "U",
"middle": [],
"last": "Wassner",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "451--471",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Belica, C., Kupietz, M., L\u00fcngen, H., and Witt, A. (2011). The morphosyntactic annotation of DeReKo: Interpreta- tion, opportunities and pitfalls. In Konopka, M., Kubczak, J., Mair, C., \u0160ticha, F., and Wassner, U., editors, Se- lected contributions from the conference Grammar and Corpora 2009, pages 451-471, T\u00fcbingen. Gunter Narr Verlag.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "COSMAS II. Recherchieren in den Korpora des IDS. Sprachreport, 3/2005",
"authors": [
{
"first": "F",
"middle": [],
"last": "Bodmer",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "2--5",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bodmer, F. (2005). COSMAS II. Recherchieren in den Ko- rpora des IDS. Sprachreport, 3/2005:2-5.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Theano biaffine dependency parser",
"authors": [
{
"first": "B.-N",
"middle": [],
"last": "Do",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Do, B.-N. (2019). Theano biaffine dependency parser. https://github.com/bichngocdo/theano-biaffine-parser. Accessed 2020-02-20.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Deep biaffine attention for neural dependency parsing",
"authors": [
{
"first": "T",
"middle": [],
"last": "Dozat",
"suffix": ""
},
{
"first": "C",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2017,
"venue": "5th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dozat, T. and Manning, C. D. (2017). Deep biaffine at- tention for neural dependency parsing. In 5th Interna- tional Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Long short-term memory",
"authors": [
{
"first": "S",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural Computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hochreiter, S. and Schmidhuber, J. (1997). Long short-term memory. Neural Computation, 9(8):1735-1780.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "The German Reference Corpus DeReKo: A Primordial Sample for Linguistic Research",
"authors": [
{
"first": "M",
"middle": [],
"last": "Kupietz",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Belica",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Keibel",
"suffix": ""
},
{
"first": "A",
"middle": [
"; N"
],
"last": "Witt",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Choukri",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Maegaard",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Mariani",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Odjik",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Piperidis",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Rosner",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Tapias",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Seventh conference on International Language Resources and Evaluation (LREC'10)",
"volume": "",
"issue": "",
"pages": "1848--1854",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kupietz, M., Belica, C., Keibel, H., and Witt, A. (2010). The German Reference Corpus DeReKo: A Primor- dial Sample for Linguistic Research. In Calzolari, N., Choukri, K., Maegaard, B., Mariani, J., Odjik, J., Piperidis, S., Rosner, M., and Tapias, D., edi- tors, Proceedings of the Seventh conference on Interna- tional Language Resources and Evaluation (LREC'10), pages 1848-1854, Valletta/Paris. European Language Re- sources Association (ELRA). http://www.lrec-conf.org/ proceedings/lrec2010/pdf/414_Paper.pdf.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "M\u00f6glichkeiten der Erforschung grammatischer Variation mithilfe von KorAP",
"authors": [
{
"first": "M",
"middle": [],
"last": "Kupietz",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Diewald",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Hanl",
"suffix": ""
},
{
"first": "Margaretha",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2017,
"venue": "Grammatische Variation. Empirische Zug\u00e4nge und theoretische Modellierung",
"volume": "",
"issue": "",
"pages": "319--329",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kupietz, M., Diewald, N., Hanl, M., and Margaretha, E. (2017). M\u00f6glichkeiten der Erforschung grammatischer Variation mithilfe von KorAP. In Konopka, M. and W\u00f6ll- stein, A., editors, Grammatische Variation. Empirische Zug\u00e4nge und theoretische Modellierung, pages 319-329.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "The German Reference Corpus DeReKo: New Developments -New Opportunities",
"authors": [
{
"first": "M",
"middle": [],
"last": "Kupietz",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "L\u00fcngen",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Kamocki",
"suffix": ""
},
{
"first": "A",
"middle": [
"; N"
],
"last": "Witt",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Choukri",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Cieri",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Declerck",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Goggi",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Hasida",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Isahara",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Maegaard",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Mariani",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Mazo",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Moreno",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Odijk",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Piperidis",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Tokunaga",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC'18)",
"volume": "",
"issue": "",
"pages": "4353--4360",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kupietz, M., L\u00fcngen, H., Kamocki, P., and Witt, A. (2018). The German Reference Corpus DeReKo: New Develop- ments -New Opportunities. In Calzolari, N., Choukri, K., Cieri, C., Declerck, T., Goggi, S., Hasida, K., Isahara, H., Maegaard, B., Mariani, J., Mazo, H., Moreno, A., Odijk, J., Piperidis, S., and Tokunaga, T., editors, Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC'18), pages 4353-4360, Miyazaki/Paris. European Language Resources Associ- ation (ELRA). http://www.lrec-conf.org/proceedings/ lrec2018/pdf/737.pdf.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Two/too simple adaptations of word2vec for syntax problems",
"authors": [],
"year": null,
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics -Human Language Technologies (NAACL HLT 2015)",
"volume": "",
"issue": "",
"pages": "1299--1304",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Two/too simple adaptations of word2vec for syntax prob- lems. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics -Human Language Technologies (NAACL HLT 2015), pages 1299--1304, Denver, CO.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Maltparser: A data-driven parser-generator for dependency parsing",
"authors": [
{
"first": "J",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Nilsson",
"suffix": ""
}
],
"year": 2006,
"venue": "LREC",
"volume": "6",
"issue": "",
"pages": "2216--2219",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nivre, J., Hall, J., and Nilsson, J. (2006). Maltparser: A data-driven parser-generator for dependency parsing. In LREC, volume 6, pages 2216-2219.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Probabilistic part-of-speech tagging using decision trees",
"authors": [
{
"first": "H",
"middle": [],
"last": "Schmid",
"suffix": ""
}
],
"year": 1994,
"venue": "International Conference on New Methods in Language Processing",
"volume": "",
"issue": "",
"pages": "44--49",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Schmid, H. (1994). Probabilistic part-of-speech tagging us- ing decision trees. In International Conference on New Methods in Language Processing, pages 44-49, Manch- ester, UK. https://www.cis.uni-muenchen.de/~schmid/ tools/TreeTagger/data/tree-tagger1.pdf.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Introducing the SPMRL 2014 shared task on parsing morphologically-rich languages",
"authors": [
{
"first": "D",
"middle": [],
"last": "Seddah",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "K\u00fcbler",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Tsarfaty",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the First Joint Workshop on Statistical Parsing of Morphologically Rich Languages and Syntactic Analysis of Non-Canonical Languages",
"volume": "",
"issue": "",
"pages": "103--109",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Seddah, D., K\u00fcbler, S., and Tsarfaty, R. (2014). In- troducing the SPMRL 2014 shared task on parsing morphologically-rich languages. In Proceedings of the First Joint Workshop on Statistical Parsing of Morpho- logically Rich Languages and Syntactic Analysis of Non- Canonical Languages, pages 103-109, Dublin, Ireland. Dublin City University.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Making ellipses explicit in dependency conversion for a German treebank",
"authors": [
{
"first": "W",
"middle": [],
"last": "Seeker",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Kuhn",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)",
"volume": "",
"issue": "",
"pages": "3132--3139",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Seeker, W. and Kuhn, J. (2012). Making ellipses explicit in dependency conversion for a German treebank. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12), pages 3132-3139, Istanbul, Turkey. European Language Re- sources Association (ELRA).",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Domain adaptation for dependency parsing via self-training",
"authors": [
{
"first": "J",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Elkaref",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Bohnet",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 14th International Conference on Parsing Technologies",
"volume": "",
"issue": "",
"pages": "1--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yu, J., Elkaref, M., and Bohnet, B. (2015). Domain adap- tation for dependency parsing via self-training. In Pro- ceedings of the 14th International Conference on Parsing Technologies, pages 1-10, Bilbao, Spain. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "German Reference Corpus DeReKo-2017-I",
"authors": [],
"year": 2017,
"venue": "Language Resource References Institut f\u00fcr Deutsche Sprache",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Language Resource References Institut f\u00fcr Deutsche Sprache (2017). German Refer- ence Corpus DeReKo-2017-I. PID: http://hdl.handle.net/ 10932/00-0373-23CD-C58F-FF01-3.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "German Reference Corpus DeReKo-2019-I",
"authors": [
{
"first": "",
"middle": [],
"last": "Leibniz-Institut F\u00fcr Deutsche Sprache",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Leibniz-Institut f\u00fcr Deutsche Sprache (2019). German Ref- erence Corpus DeReKo-2019-I. PID: http://hdl.handle. net/10932/00-04BB-AF28-4A4A-2801-5.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "German Reference Corpus DeReKo-2020-I",
"authors": [
{
"first": "",
"middle": [],
"last": "Leibniz-Institut F\u00fcr Deutsche Sprache",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Leibniz-Institut f\u00fcr Deutsche Sprache (2020). German Ref- erence Corpus DeReKo-2020-I. PID: http://hdl.handle. net/10932/00-04B6-B898-AD1A-8101-4.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"text": "Attachment Estimates vs. Scores",
"num": null
},
"TABREF0": {
"type_str": "table",
"content": "<table><tr><td>embeddings</td><td>dim</td><td>UAS</td><td>LAS UAE</td><td>LAE</td></tr><tr><td>Spmrl</td><td colspan=\"4\">100 93.99 92.33 95.84 94.11</td></tr><tr><td>Spmrl</td><td colspan=\"4\">200 94.15 92.59 96.23 94.66</td></tr><tr><td>DeReKo1</td><td colspan=\"4\">200 94.30 93.00 97.08 95.90</td></tr><tr><td>DeReKo2</td><td colspan=\"4\">200 94.51 93.16 97.10 95.94</td></tr><tr><td>DeReKo5</td><td colspan=\"4\">200 93.98 92.50 95.88 94.40</td></tr><tr><td colspan=\"5\">DRK2+Spmrl 200 94.02 92.58 96.97 95.79</td></tr></table>",
"text": "Attachment scores and estimates for different word embeddings",
"num": null,
"html": null
},
"TABREF3": {
"type_str": "table",
"content": "<table/>",
"text": "Scores and Estimates by Dependency Label",
"num": null,
"html": null
}
}
}
}