| { |
| "paper_id": "W17-0206", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T04:22:08.484182Z" |
| }, |
| "title": "Coreference Resolution for Swedish and German using Distant Supervision", |
| "authors": [ |
| { |
| "first": "Alexander", |
| "middle": [], |
| "last": "Wallin", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Lund University", |
| "location": { |
| "country": "Sweden" |
| } |
| }, |
| "email": "alexander@wallindevelopment.se" |
| }, |
| { |
| "first": "Pierre", |
| "middle": [], |
| "last": "Nugues", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Lund University", |
| "location": { |
| "country": "Sweden" |
| } |
| }, |
| "email": "pierre.nugues@cs.lth.se" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Coreference resolution is the identification of phrases that refer to the same entity in a text. Current techniques to solve coreferences use machine-learning algorithms, which require large annotated data sets. Such annotated resources are not available for most languages today. In this paper, we describe a method for solving coreferences for Swedish and German using distant supervision that does not use manually annotated texts. We generate a weakly labelled training set using parallel corpora, English-Swedish and English-German, where we solve the coreference for English using CoreNLP and transfer it to Swedish and German using word alignments. To carry this out, we identify mentions from dependency graphs in both target languages using handwritten rules. Finally, we evaluate the end-to-end results using the evaluation script from the CoNLL 2012 shared task for which we obtain a score of 34.98 for Swedish and 13.16 for German and, respectively, 46.73 and 36.98 using gold mentions.", |
| "pdf_parse": { |
| "paper_id": "W17-0206", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Coreference resolution is the identification of phrases that refer to the same entity in a text. Current techniques to solve coreferences use machine-learning algorithms, which require large annotated data sets. Such annotated resources are not available for most languages today. In this paper, we describe a method for solving coreferences for Swedish and German using distant supervision that does not use manually annotated texts. We generate a weakly labelled training set using parallel corpora, English-Swedish and English-German, where we solve the coreference for English using CoreNLP and transfer it to Swedish and German using word alignments. To carry this out, we identify mentions from dependency graphs in both target languages using handwritten rules. Finally, we evaluate the end-to-end results using the evaluation script from the CoNLL 2012 shared task for which we obtain a score of 34.98 for Swedish and 13.16 for German and, respectively, 46.73 and 36.98 using gold mentions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Coreference resolution is the process of determining whether two expressions refer to the same entity and linking them in a body of text. The referring words and phrases are generally called mentions. Coreference resolution is instrumental in many language processing applications such as information extraction, the construction of knowledge graphs, text summarizing, question answering, etc.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "As most current high-performance coreference solvers use machine-learning techniques and su-pervised training (Clark and Manning, 2016) , building solvers requires large amounts of texts, hand-annotated with coreference chains. Unfortunately, such corpora are expensive to produce and are far from being available for all the languages, including the Nordic languages.", |
| "cite_spans": [ |
| { |
| "start": 110, |
| "end": 135, |
| "text": "(Clark and Manning, 2016)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In the case of Swedish, there seems to be only one available corpus annotated with coreferences: SUC-Core (Nilsson Bj\u00f6rkenstam, 2013) , which consists of 20,000 words and 2,758 coreferring mentions. In comparison, the CoNLL 2012 shared task (Pradhan et al., 2012) uses a training set of more than a million word and 155,560 coreferring mentions for the English language alone.", |
| "cite_spans": [ |
| { |
| "start": 106, |
| "end": 133, |
| "text": "(Nilsson Bj\u00f6rkenstam, 2013)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 241, |
| "end": 263, |
| "text": "(Pradhan et al., 2012)", |
| "ref_id": "BIBREF23" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Although models trained on large corpora do not automatically result in better solver accuracies, the two orders of magnitude difference between the English CoNLL 2012 corpus and SUC-Core has certainly consequences on the model quality for English. Pradhan et al. (2012) posited that larger and more consistent corpora as well as a standardized evaluation scenario would be a way to improve the results in coreference resolution. The same should apply to Swedish. Unfortunately, annotating 1,000,000 words by hand requires seems to be out of reach for this language for now.", |
| "cite_spans": [ |
| { |
| "start": 249, |
| "end": 270, |
| "text": "Pradhan et al. (2012)", |
| "ref_id": "BIBREF23" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this paper, we describe a distant supervision technique to train a coreference solver for Swedish and other languages lacking large annotated corpora. Instead of using SUC-Core to train a model, we used it for evaluation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Distant supervision is a form of supervised learning, though the term is sometimes used interchangeably with weak supervision and self training depending on the source (Mintz et al., 2009; Yao et al., 2010) . The primary difference between distant supervision and supervised learning lies in the annotation procedure of the training data; supervised learning uses labelled data, often obtained through a manual annotation, whereas in the case of distant supervision, the annotation is automatically generated from another source than the training data itself.", |
| "cite_spans": [ |
| { |
| "start": 168, |
| "end": 188, |
| "text": "(Mintz et al., 2009;", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 189, |
| "end": 206, |
| "text": "Yao et al., 2010)", |
| "ref_id": "BIBREF32" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Distant Supervision", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Training data can be generated using various methods, such as simple heuristics or from the output of another model. Distant supervision will often yield models that perform less well than models using other forms of supervised learning (Yao et al., 2010) . The advantage of distant supervision is that the training set does not need an initial annotation. Distant supervision covers a wide range of methods. In this paper, we used an annotation projection, where the output of a coreference resolver is transferred across a parallel corpus, from English to Swedish and English to German, and used as input for training a solver in the target language (Martins, 2015; Exner et al., 2015) .", |
| "cite_spans": [ |
| { |
| "start": 237, |
| "end": 255, |
| "text": "(Yao et al., 2010)", |
| "ref_id": "BIBREF32" |
| }, |
| { |
| "start": 652, |
| "end": 667, |
| "text": "(Martins, 2015;", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 668, |
| "end": 687, |
| "text": "Exner et al., 2015)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Distant Supervision", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Parallel corpora have been used to transfer syntactic annotation. Hwa et al. (2005) is an example of this. In the case of coreference, Rahman and Ng (2012) used statistical machine translation to align words and sentences and transfer annotated data and other entities from one language to another. They collected a large corpus of text in Spanish and Italian, translating each sentence using machine translation, applying a coreference solver on the generated text, and aligning the sentences using unsupervised machine translation methods.", |
| "cite_spans": [ |
| { |
| "start": 66, |
| "end": 83, |
| "text": "Hwa et al. (2005)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 135, |
| "end": 155, |
| "text": "Rahman and Ng (2012)", |
| "ref_id": "BIBREF24" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Previous Work", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Martins (2015) developed a coreference solver for Spanish and Portuguese using distant supervision, where he transferred entity mentions from English to a target language using machinelearning techniques.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Previous Work", |
| "sec_num": "3" |
| }, |
| { |
| "text": "In this paper, we describe a new projection method, where we use a parallel corpus similarly to Hwa et al. (2005) and, where we follow the methods and metrics described by Rahman and Ng (2012) . We also reused the maximum span heuristic in Martins (2015) and the pruning of documents according to the ratio between correct and incorrect entity alignments.", |
| "cite_spans": [ |
| { |
| "start": 96, |
| "end": 113, |
| "text": "Hwa et al. (2005)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 172, |
| "end": 192, |
| "text": "Rahman and Ng (2012)", |
| "ref_id": "BIBREF24" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Previous Work", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Our goal was to create a coreference solver for Swedish and German with no labelled data to train the model. Swedish has no corpora of sufficient size to train a general coreference solver, whereas German has a large labelled corpus in the form of T\u00fcba D/Z (Henrich and Hinrichs, 2014) . Although we could have trained a solver from the T\u00fcba D/Z dataset, we applied the same projection methods to German to determine if our method would generalize beyond Swedish.", |
| "cite_spans": [ |
| { |
| "start": 257, |
| "end": 285, |
| "text": "(Henrich and Hinrichs, 2014)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Overview", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "We generated weakly labelled data using a parallel corpus consisting of sentence-aligned text with a sentence mapping from English to Swedish and English to German. We annotated the English text using a coreference solver for English and we transferred the coreference chains to the target language by word alignment. We then used the transferred coreference chains to train coreference solvers for the target languages.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Overview", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "We used three language-dependent processing pipelines:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Processing Pipelines", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "\u2022 We applied Stanford's CoreNLP (Manning et al., 2014) to annotate the English part. We used the parts of speech, dependency graphs, and coreference chains;", |
| "cite_spans": [ |
| { |
| "start": 32, |
| "end": 54, |
| "text": "(Manning et al., 2014)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Processing Pipelines", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "\u2022 Mate Tools (Bj\u00f6rkelund et al., 2010) for German;", |
| "cite_spans": [ |
| { |
| "start": 13, |
| "end": 38, |
| "text": "(Bj\u00f6rkelund et al., 2010)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Processing Pipelines", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "\u2022 For Swedish, we used Stagger (\u00d6stling, 2013) for the parts of speech and MaltParser for the dependencies (Nivre et al., 2007) .", |
| "cite_spans": [ |
| { |
| "start": 107, |
| "end": 127, |
| "text": "(Nivre et al., 2007)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Processing Pipelines", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "As annotation and evaluation framework, we followed the CoNLL 2011 and 2012 shared tasks (Pradhan et al., 2011; Pradhan et al., 2012) . These tasks evaluated coreference resolution systems for three languages: English, Arabic, and Chinese. To score the systems, they defined a set of metrics as well as a script that serves as standard in the field. We carried out the evaluation for both Swedish and German with this CoNLL script. For Swedish, we used SUC-Core (Nilsson Bj\u00f6rkenstam, 2013) as a test set, while for German, we used the T\u00fcba-D/Z corpus in the same manner as with SUC-Core (Henrich and Hinrichs, 2013; Henrich and Hinrichs, 2014) .", |
| "cite_spans": [ |
| { |
| "start": 89, |
| "end": 111, |
| "text": "(Pradhan et al., 2011;", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 112, |
| "end": 133, |
| "text": "Pradhan et al., 2012)", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 587, |
| "end": 615, |
| "text": "(Henrich and Hinrichs, 2013;", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 616, |
| "end": 643, |
| "text": "Henrich and Hinrichs, 2014)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "As parallel corpora, we used the Europarl corpus (Koehn, 2005) , consisting of protocols and articles from the EU parliament gathered from 1996 in 21 language pairs.", |
| "cite_spans": [ |
| { |
| "start": 49, |
| "end": 62, |
| "text": "(Koehn, 2005)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Europarl", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "Europarl is a large sentence-aligned unannotated corpus consisting of both text documents and web data in the XML format. Each language pair has alignment files to map the respective sentences in the different languages. We only used the text documents in this study and we removed unaligned sentences. Koehn (2005) evaluated the Europarl corpus using the BLEU metric (Papineni et al., 2002) . High BLEU scores are preferable as they often result in better word alignments (Yarowsky et al., 2001) .", |
| "cite_spans": [ |
| { |
| "start": 303, |
| "end": 315, |
| "text": "Koehn (2005)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 368, |
| "end": 391, |
| "text": "(Papineni et al., 2002)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 473, |
| "end": 496, |
| "text": "(Yarowsky et al., 2001)", |
| "ref_id": "BIBREF33" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Europarl", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "The BLEU values for Europarl ranged from 10.3 to 40.2, with the English-to-Swedish at 24.8 and English-to-German at 17.6, where 0 means no alignment and 100 means a perfect alignment. Additionally, Ahrenberg (2010) notes that the English-Swedish alignment of Europarl contains a high share of structurally complex relations, which makes word alignment more difficult.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Europarl", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "To carry out the transfer of entity mentions, we aligned the sentences and the words of the parallel corpora, where English was the source language and Swedish and German, the target languages. Europarl aligns the documents and sentences using the Gale and Church algorithm. This introduces additional errors when aligning the words.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word Alignment", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "Instead, we used the precomputed word alignments from the open parallel corpus, OPUS, where improper word alignments are mitigated (Lee et al., 2010; Tiedemann, 2012) . The word alignments in OPUS use the phrase-based grow-diagfinal-and heuristic, which gave better results. Additionally, many of the challenges in aligning English to Swedish described by Ahrenberg (2010) appeared to be mitigated.", |
| "cite_spans": [ |
| { |
| "start": 131, |
| "end": 149, |
| "text": "(Lee et al., 2010;", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 150, |
| "end": 166, |
| "text": "Tiedemann, 2012)", |
| "ref_id": "BIBREF29" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word Alignment", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "From the word alignment, we carried out the mention transfer. We used a variation of the maximum span heuristic.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Bilingual Mention Alignment", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Bilingual word alignment is complicated even under ideal circumstances as modeling errors, language differences, and slight differences in meaning may all affect the word alignment negatively. Figure 1 shows two examples of good and bad projections from Yarowsky et al. (2001) . The figures describe two projection scenarios with varying levels of complexity from a source language on the top of the figures to a target language at the bottom. The solid lines correspond to word alignments while the dotted lines define the boundaries of their maximum span heuristic. Yarowsky et al. (2001) argue that even though individual word alignments are incorrect, a group of words corresponding to a noun phrase in the source language tends to be aligned with another group in the target language. The largest span of aligned words from a noun phrase in the target language usually corresponds to the original noun phrase in the source language.", |
| "cite_spans": [ |
| { |
| "start": 254, |
| "end": 276, |
| "text": "Yarowsky et al. (2001)", |
| "ref_id": "BIBREF33" |
| }, |
| { |
| "start": 568, |
| "end": 590, |
| "text": "Yarowsky et al. (2001)", |
| "ref_id": "BIBREF33" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 193, |
| "end": 201, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Maximal Span Heuristic", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "Following Yarowsky et al. (2001) , the maximal span heuristic is to discard any word alignment not mapped to the largest continuous span of the target language and discard overlapping alignments, where one mention is not bounded by the other mentions for each mention.", |
| "cite_spans": [ |
| { |
| "start": 10, |
| "end": 32, |
| "text": "Yarowsky et al. (2001)", |
| "ref_id": "BIBREF33" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Maximal Span Heuristic", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "The heuristic is nontrivial to evaluate and we primarily selected it for its simplicity, as well as its efficiency with coreference solvers for Spanish and Portuguese using distant supervision (Martins, 2015).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Maximal Span Heuristic", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "The maximum span heuristic uses no syntactic knowledge other than tokenization for the target language.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Maximum Span Optimal Mention", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "We implemented a variation of the maximum span heuristic which utilizes syntactic knowledge of the target language. We selected the largest mention bounded by each maximum span instead of the maximum span itself. As result, the generated corpus would only consist of valid mentions rather than brackets of text without any relation to a mention. This has the additional benefit of simplifying overlapping spans as a mention has a unique head and the problem of overlapping is replaced with pruning mentions with identical bounds.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Maximum Span Optimal Mention", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "We removed some documents in the corpus from the generated data set according to two metrics: The document length and alignment agreement as in Martins (2015).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Document Pruning", |
| "sec_num": "6.3" |
| }, |
| { |
| "text": "The goal was to create a training set with comparable size to the CoNLL task, i.e. a million", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Document Pruning", |
| "sec_num": "6.3" |
| }, |
| { |
| "text": "[J 1 N 1 ] VBD [N 2 N 2 ] IN [N 3 ] [DT (1) N (1) J (1) ] VBD [N (2) de N (2) ] DT [N (3) ] / 0 [DT 1 J 1 N 1 ] VBD [N 2 N 2 ] [DT (1) N (1) ] VBD [N (2) } J (1) { de N (2) ] / 0", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Document Pruning", |
| "sec_num": "6.3" |
| }, |
| { |
| "text": "Figure 1: Left: Standard projection scenario according to Yarowsky et al. (2001) ; Right: Problematic projection scenario words or more. To this effect, we aligned all the documents using the maximum span variant and we measured the alignment accuracy defined as the number of accepted alignments divided by the sum of all alignments.", |
| "cite_spans": [ |
| { |
| "start": 58, |
| "end": 80, |
| "text": "Yarowsky et al. (2001)", |
| "ref_id": "BIBREF33" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Document Pruning", |
| "sec_num": "6.3" |
| }, |
| { |
| "text": "We removed all the documents with lower than average alignment accuracy. Additionally, larger documents were removed until we could generate a total training set consisting of approximately a million words in total.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Document Pruning", |
| "sec_num": "6.3" |
| }, |
| { |
| "text": "There are multiple metrics to evaluate coreference resolution. We used the CoNLL 2012 score as it consists of a single value (Pradhan et al., 2012) . This score is the mean of three other metrics: MUC6 (Vilain et al., 1995) , B 3 (Bagga and Baldwin, 1998), and CEAF E (Luo, 2005) . We also report the values we obtained with CEAF M , and BLANC (Recasens and Hovy, 2011).", |
| "cite_spans": [ |
| { |
| "start": 125, |
| "end": 147, |
| "text": "(Pradhan et al., 2012)", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 202, |
| "end": 223, |
| "text": "(Vilain et al., 1995)", |
| "ref_id": "BIBREF30" |
| }, |
| { |
| "start": 268, |
| "end": 279, |
| "text": "(Luo, 2005)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Metrics", |
| "sec_num": "7.1" |
| }, |
| { |
| "text": "Swedish: SUC-Core. The SUC-Core corpus (Nilsson Bj\u00f6rkenstam, 2013) consists of 20,000 words and tokens in 10 documents with 2,758 coreferring mentions. The corpus is a subset of the SUC 2.0 corpus, annotated with noun phrase coreferential links (Gustafson-Capkov\u00e1 and Hartmann, 2006) .", |
| "cite_spans": [ |
| { |
| "start": 245, |
| "end": 283, |
| "text": "(Gustafson-Capkov\u00e1 and Hartmann, 2006)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Test Sets", |
| "sec_num": "7.2" |
| }, |
| { |
| "text": "The corpus is much too small to train a coreference solver, but it is more than sufficient to evaluate solvers trained on some different source material. As a preparatory step to evaluate coreference resolution in Swedish, the information from SUC-Core was merged with SUC 2.0 and SUC 3.0 to have a CoNLL 2012 compatible file format. Additionally, we removed the singletons from the merged data files.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Test Sets", |
| "sec_num": "7.2" |
| }, |
| { |
| "text": "German: T\u00fcba D/Z. The T\u00fcba D/Z corpus (Henrich and Hinrichs, 2013; Henrich and Hinrichs, 2014) consists of 1,787,801 words and tokens organized in 3,644 files annotated with both part of speech and dependency graph information.", |
| "cite_spans": [ |
| { |
| "start": 38, |
| "end": 66, |
| "text": "(Henrich and Hinrichs, 2013;", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 67, |
| "end": 94, |
| "text": "Henrich and Hinrichs, 2014)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Test Sets", |
| "sec_num": "7.2" |
| }, |
| { |
| "text": "Although the corpus would be sufficient in size to train a coreference solver, we only used it for evaluation in this work. As with SUC-Core, we removed all the singletons. Due to time and memory constraints, we only used a subset of the T\u00fcba D/Z corpus for evaluation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Test Sets", |
| "sec_num": "7.2" |
| }, |
| { |
| "text": "Similarly to the CoNLL 2011 and 2012 shared tasks, we evaluated our system using gold and predicted mention boundaries. When given the gold mentions, the solver knows the boundaries of all nonsingleton mentions in the test set, while with predicted mention boundaries, the solver has no prior knowledge about the test set. We also followed the shared tasks in only using machineannotated parses as input.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "End-to-End Evaluation", |
| "sec_num": "7.3" |
| }, |
| { |
| "text": "The rationale for using gold mention boundaries is that they correspond to the use of an ideal method for mention identification, where the results are an upper bound for the solver as it does not consider singleton mentions (Pradhan et al., 2011) .", |
| "cite_spans": [ |
| { |
| "start": 225, |
| "end": 247, |
| "text": "(Pradhan et al., 2011)", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "End-to-End Evaluation", |
| "sec_num": "7.3" |
| }, |
| { |
| "text": "For Swedish, we restricted the training set to the shortest documents containing at least one coreference chain. After selection and pruning, this set consisted of 4,366,897 words and 183,207 sentences in 1,717 documents.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Selection of Training Sets", |
| "sec_num": "8.1" |
| }, |
| { |
| "text": "For German, we extracted a training set consisting of randomly selected documents containing at least one coreference chain. After selection and pruning, the set consisted of 9,028,208 words and 342,852 sentences in 1,717 documents.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Selection of Training Sets", |
| "sec_num": "8.1" |
| }, |
| { |
| "text": "Swedish. The mentions in SUC-Core correspond to noun phrases. We identified them automatically from the dependency graphs produced by Maltparser using a set of rules based on the mention headwords. Table 1 shows these rules that consist of a part of speech and an additional constraint. When a rule matches the part of speech of a word, we create the mention from its yield.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 198, |
| "end": 205, |
| "text": "Table 1", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Mention Identification", |
| "sec_num": "8.2" |
| }, |
| { |
| "text": "As SUC-Core does not explicitly define the mention bracketing rules, we had to further analyze this corpus to discern basic patterns and adjust the rules to better map the mention boundaries (Table 2) .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 191, |
| "end": 200, |
| "text": "(Table 2)", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Mention Identification", |
| "sec_num": "8.2" |
| }, |
| { |
| "text": "German. The identification of noun phrases in German proved more complicated than in Swedish, especially due to the split antecedents linked by a coordinating conjunction. Consider the phrase Anna and Paul. Anna, Paul, as well as the whole phrase are mentions of entities. In Swedish, the corresponding phrase would be Anna och Paul with the conjunction och as the head word. The annotation scheme used for the TIGER corpus does not have the conjunction as head for coordinated noun phrases (Albert et al., 2003) . In Swedish, the rule for identifying the same kind of split antecedents only needs to check whether a conjunction has children that were noun phrases, whereas in German the same rule required more analysis. Table 3 shows the rules for the identification of noun phrases in German, and Table 4 , the postprocessing rules.", |
| "cite_spans": [ |
| { |
| "start": 491, |
| "end": 512, |
| "text": "(Albert et al., 2003)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 722, |
| "end": 729, |
| "text": "Table 3", |
| "ref_id": null |
| }, |
| { |
| "start": 800, |
| "end": 807, |
| "text": "Table 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Mention Identification", |
| "sec_num": "8.2" |
| }, |
| { |
| "text": "To solve coreference, we used a variation of the closest antecedent approach described in Soon et al. (2001) . This approach models chains as a projected graph with mentions as vertices, where every mention has at most one antecedent and one anaphora. The modeling assumptions relaxes the complex relationship between coreferring mentions by only considering the relationship between a mention and its closest antecedent.", |
| "cite_spans": [ |
| { |
| "start": 90, |
| "end": 108, |
| "text": "Soon et al. (2001)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Generating a Training Set", |
| "sec_num": "9.1" |
| }, |
| { |
| "text": "The problem is framed as a binary classification problem, where the system only needs to decide Table 3 : Hand-written rules for noun phrase identification for German based on T\u00fcba-D/Z the positive ones, which may skew the model. We limited the ratio at somewhere between 4 and 5 % and randomizing which negative samples become part of the final training set.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 96, |
| "end": 103, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Generating a Training Set", |
| "sec_num": "9.1" |
| }, |
| { |
| "text": "We used the C4.5, random forest, and logistic regression algorithms from the Weka Toolkit and LibLinear to train the models (Witten and Frank, 2005; Hall et al., 2009; Fan et al., 2008) .", |
| "cite_spans": [ |
| { |
| "start": 124, |
| "end": 148, |
| "text": "(Witten and Frank, 2005;", |
| "ref_id": "BIBREF31" |
| }, |
| { |
| "start": 149, |
| "end": 167, |
| "text": "Hall et al., 2009;", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 168, |
| "end": 185, |
| "text": "Fan et al., 2008)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Machine-Learning Algorithms", |
| "sec_num": "9.2" |
| }, |
| { |
| "text": "Swedish. As features, we used a subset Stamborg et al. (2012) and Soon et al. (2001) . Table 5 shows the complete list.", |
| "cite_spans": [ |
| { |
| "start": 39, |
| "end": 61, |
| "text": "Stamborg et al. (2012)", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 66, |
| "end": 84, |
| "text": "Soon et al. (2001)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 87, |
| "end": 94, |
| "text": "Table 5", |
| "ref_id": "TABREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Features", |
| "sec_num": "9.3" |
| }, |
| { |
| "text": "German. The feature set for German is described in Table 5. The primary difference from Swedish is the addition of gender-classified names. We used the lists of names and job titles from IMS Hotcoref DE (R\u00f6siger and Kuhn, 2016) to train the German model. The morphological information from both CoreNLP and Mate Tools appeared limited compared with Swedish, which is reflected in the feature set.", |
| "cite_spans": [ |
| { |
| "start": 218, |
| "end": 242, |
| "text": "(R\u00f6siger and Kuhn, 2016)", |
| "ref_id": "BIBREF26" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 51, |
| "end": 58, |
| "text": "Table 5", |
| "ref_id": "TABREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Features", |
| "sec_num": "9.3" |
| }, |
| { |
| "text": "Swedish. The Swedish EuroParl corpus consists of 8,445 documents. From these documents, we selected a subset of 3,445 documents based on their size, preferring the smaller documents. (Table 4: Additional hand-written rules for post-processing the identified noun phrases in German. 1. Remove words from the start or the end of the phrase if they have one of the POS tags $., $(, PROP, or KON. 2. If there is a word with the POS tag VVPP after the head word, the word before it becomes the last word of the phrase. 3. If there is a dependent word with the POS tag KON whose string equals und, create additional mentions from the phrases to the left and right of this word. 4. If there is a word with the POS tag APPRART after the head word, the word before it becomes the last word of the phrase.)", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 653, |
| "end": 660, |
| "text": "Table 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Mention Alignment", |
| "sec_num": "10.1" |
| }, |
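Rule 1 of the German post-processing rules (stripping unwanted POS tags from the edges of a candidate phrase) can be sketched as follows; the `(word, pos)` token representation is a hypothetical choice for illustration.

```python
def trim_phrase(tokens, bad_edge_tags=frozenset({"$.", "$(", "PROP", "KON"})):
    """Rule 1: drop words bearing the listed POS tags from either end of a
    candidate noun phrase. tokens is a list of (word, pos) pairs."""
    i, j = 0, len(tokens)
    # Advance past bad tags at the start of the phrase...
    while i < j and tokens[i][1] in bad_edge_tags:
        i += 1
    # ...and retreat past bad tags at its end.
    while j > i and tokens[j - 1][1] in bad_edge_tags:
        j -= 1
    return tokens[i:j]
```

Interior words with these tags are kept; only the phrase boundaries are trimmed.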
| { |
| "text": "The selected documents contained a total of 1,189,557 successfully transferred mentions and 541,608 rejected mentions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Mention Alignment", |
| "sec_num": "10.1" |
| }, |
| { |
| "text": "We removed the documents in which less than 70% of the mentions were successfully transferred, which yielded a final tally of 515,777 successfully transferred mentions and 198,675 rejected mentions in 1,717 documents.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Mention Alignment", |
| "sec_num": "10.1" |
| }, |
| { |
| "text": "German. The German EuroParl corpus consists of 8,446 documents. From these documents, we randomly selected a subset consisting of 2,568 documents.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Mention Alignment", |
| "sec_num": "10.1" |
| }, |
| { |
| "text": "The selected documents contained a total of 992,734 successfully transferred mentions and 503,690 rejected mentions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Mention Alignment", |
| "sec_num": "10.1" |
| }, |
| { |
| "text": "We removed the documents in which less than 60% of the mentions were successfully transferred, which yielded a final tally of 975,539 successfully transferred mentions and 491,009 rejected mentions in 964 documents.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Mention Alignment", |
| "sec_num": "10.1" |
| }, |
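The document filtering described above (a 70% transfer-rate threshold for Swedish, 60% for German) amounts to a simple per-document rate check. This is a sketch under an assumed `(doc_id, transferred, rejected)` record layout, not the authors' code.

```python
def filter_documents(docs, min_rate=0.70):
    """Keep the ids of documents whose share of successfully transferred
    mentions is at least min_rate. docs: iterable of
    (doc_id, transferred_count, rejected_count) tuples."""
    kept = []
    for doc_id, ok, bad in docs:
        total = ok + bad
        if total and ok / total >= min_rate:
            kept.append(doc_id)
    return kept
```

For German, the same function would be called with `min_rate=0.60`.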
| { |
| "text": "Swedish. Using the rules described in Table 1, we identified 91.35% of the mentions in SUC-Core. We could improve the results to 95.82% with the additional post-processing rules described in Table 2. German. Using the rules described in Table 3, we identified 65.90% of the mentions in T\u00fcba-D/Z. With the additional post-processing rules described in Table 4, we reached 82.08%. Table 6 shows the end-to-end results when using predicted mentions, and Table 7 shows the results of the same pipeline with gold mentions (Table 7: End-to-end results using gold mention boundaries). These latter results correspond to the upper bound we could obtain with this technique and the same feature set.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 38, |
| "end": 45, |
| "text": "Table 1", |
| "ref_id": "TABREF1" |
| }, |
| { |
| "start": 192, |
| "end": 199, |
| "text": "Table 2", |
| "ref_id": "TABREF2" |
| }, |
| { |
| "start": 237, |
| "end": 244, |
| "text": "Table 3", |
| "ref_id": null |
| }, |
| { |
| "start": 352, |
| "end": 359, |
| "text": "Table 4", |
| "ref_id": null |
| }, |
| { |
| "start": 397, |
| "end": 404, |
| "text": "Table 6", |
| "ref_id": "TABREF6" |
| }, |
| { |
| "start": 468, |
| "end": 475, |
| "text": "Table 7", |
| "ref_id": null |
| }, |
| { |
| "start": 644, |
| "end": 651, |
| "text": "Table 7", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Mention Identification", |
| "sec_num": "10.2" |
| }, |
| { |
| "text": "In this paper, we have described end-to-end coreference solvers for Swedish and German that use no annotated data. We used feature sets limited to simple linguistic features easily extracted from the Swedish treebank and the German Tiger corpus, respectively. Using a larger subset of the feature set of Stamborg et al. (2012) would very likely improve the results of this work. The results in Tables 6 and 7 show that even though the dependency-grammar-based approach to identifying mentions yields decent performance compared with CoNLL 2011 and 2012, a better identification and pruning procedure would probably improve the results significantly. This is most manifest in German, where using the gold mentions results in a considerable increase in the scores: Table 7 shows a difference of more than 23 points compared with Table 6. This demonstrates that the large difference in scores between Swedish and German stems from the methods used for mention identification rather than from the different feature sets or the correctness of the training set. It can be explained by the difficulty of predicting mentions for German, possibly because of differences in the dependency grammar format, as relatively few mentions were identified from their head elements.", |
| "cite_spans": [ |
| { |
| "start": 298, |
| "end": 320, |
| "text": "Stamborg et al. (2012)", |
| "ref_id": "BIBREF28" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 388, |
| "end": 402, |
| "text": "Tables 6 and 7", |
| "ref_id": "TABREF6" |
| }, |
| { |
| "start": 755, |
| "end": 762, |
| "text": "Table 7", |
| "ref_id": null |
| }, |
| { |
| "start": 828, |
| "end": 835, |
| "text": "Table 6", |
| "ref_id": "TABREF6" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "11" |
| }, |
| { |
| "text": "The final results also show that the classifiers based on J48 and random forests produced better scores than logistic regression. Coreference resolution using weakly labelled training data from distant supervision enabled us to create coreference solvers for Swedish and German, even though the mention alignment in the parallel corpora was far from perfect. It is difficult to compare the results we obtained in this article with those reported at CoNLL, as the languages and test sets are different. Despite this, we observe that when using gold mention boundaries, we reach a MELA CoNLL score for Swedish that is comparable with the results obtained for Arabic in the CoNLL-2012 shared task under similar preconditions. We believe this shows that the method we proposed is viable. Our results are, however, lower than those obtained for English and Chinese in the same task and could probably be improved with better mention detection.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "11" |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "This research was supported by Vetenskapsr\u00e5det, the Swedish research council, under the Det digitaliserade samh\u00e4llet program.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Alignment-based profiling of Europarl data in an English-Swedish parallel corpus", |
| "authors": [ |
| { |
| "first": "Lars", |
| "middle": [ |
| "Ahrenberg" |
| ], |
| "last": "", |
| "suffix": "" |
| }, |
| { |
| "first": ".", |
| "middle": [ |
| ";" |
| ], |
| "last": "", |
| "suffix": "" |
| }, |
| { |
| "first": "Khalid", |
| "middle": [], |
| "last": "Choukri", |
| "suffix": "" |
| }, |
| { |
| "first": "Bente", |
| "middle": [], |
| "last": "Maegaard", |
| "suffix": "" |
| }, |
| { |
| "first": "Joseph", |
| "middle": [], |
| "last": "Mariani", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lars Ahrenberg. 2010. Alignment-based profiling of Europarl data in an English-Swedish parallel corpus. In Nicoletta Calzolari (Conference Chair), Khalid Choukri, Bente Maegaard, Joseph Mariani, Jan Odijk, Stelios Piperidis, Mike Rosner, and Daniel Tapias, editors, Proceedings of the Seventh Interna- tional Conference on Language Resources and Eval- uation (LREC'10), Valletta, Malta, may. European Language Resources Association (ELRA).", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Tiger annotationsschema", |
| "authors": [ |
| { |
| "first": "Stefanie", |
| "middle": [], |
| "last": "Albert", |
| "suffix": "" |
| }, |
| { |
| "first": "Jan", |
| "middle": [], |
| "last": "Anderssen", |
| "suffix": "" |
| }, |
| { |
| "first": "Regine", |
| "middle": [], |
| "last": "Bader", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephanie", |
| "middle": [], |
| "last": "Becker", |
| "suffix": "" |
| }, |
| { |
| "first": "Tobias", |
| "middle": [], |
| "last": "Bracht", |
| "suffix": "" |
| }, |
| { |
| "first": "Sabine", |
| "middle": [], |
| "last": "Brants", |
| "suffix": "" |
| }, |
| { |
| "first": "Thorsten", |
| "middle": [], |
| "last": "Brants", |
| "suffix": "" |
| }, |
| { |
| "first": "Vera", |
| "middle": [], |
| "last": "Demberg", |
| "suffix": "" |
| }, |
| { |
| "first": "Stefanie", |
| "middle": [], |
| "last": "Dipper", |
| "suffix": "" |
| }, |
| { |
| "first": "Peter", |
| "middle": [], |
| "last": "Eisenberg", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Stefanie Albert, Jan Anderssen, Regine Bader, Stephanie Becker, Tobias Bracht, Sabine Brants, Thorsten Brants, Vera Demberg, Stefanie Dipper, Peter Eisenberg, et al. 2003. Tiger annotationss- chema. Technical report, Universit\u00e4t des Saarlan- des.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Entitybased cross-document coreferencing using the vector space model", |
| "authors": [ |
| { |
| "first": "Amit", |
| "middle": [], |
| "last": "Bagga", |
| "suffix": "" |
| }, |
| { |
| "first": "Breck", |
| "middle": [], |
| "last": "Baldwin", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "79--85", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Amit Bagga and Breck Baldwin. 1998. Entity- based cross-document coreferencing using the vec- tor space model. In Proceedings of the 36th An- nual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics -Volume 1, ACL '98, pages 79-85, Stroudsburg, PA, USA. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "A high-performance syntactic and semantic dependency parser", |
| "authors": [ |
| { |
| "first": "Anders", |
| "middle": [], |
| "last": "Bj\u00f6rkelund", |
| "suffix": "" |
| }, |
| { |
| "first": "Bernd", |
| "middle": [], |
| "last": "Bohnet", |
| "suffix": "" |
| }, |
| { |
| "first": "Love", |
| "middle": [], |
| "last": "Hafdell", |
| "suffix": "" |
| }, |
| { |
| "first": "Pierre", |
| "middle": [], |
| "last": "Nugues", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 23rd International Conference on Computational Linguistics: Demonstrations", |
| "volume": "", |
| "issue": "", |
| "pages": "33--36", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Anders Bj\u00f6rkelund, Bernd Bohnet, Love Hafdell, and Pierre Nugues. 2010. A high-performance syntac- tic and semantic dependency parser. In Proceedings of the 23rd International Conference on Computa- tional Linguistics: Demonstrations, pages 33-36. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Improving coreference resolution by learning entitylevel distributed representations", |
| "authors": [ |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Clark", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Christopher", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "643--653", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kevin Clark and Christopher D. Manning. 2016. Im- proving coreference resolution by learning entity- level distributed representations. In Proceedings of the 54th Annual Meeting of the Association for Com- putational Linguistics (Volume 1: Long Papers), pages 643-653, Berlin, Germany, August. Associ- ation for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "A distant supervision approach to semantic role labeling", |
| "authors": [ |
| { |
| "first": "Peter", |
| "middle": [], |
| "last": "Exner", |
| "suffix": "" |
| }, |
| { |
| "first": "Marcus", |
| "middle": [], |
| "last": "Klang", |
| "suffix": "" |
| }, |
| { |
| "first": "Pierre", |
| "middle": [], |
| "last": "Nugues", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Fourth Joint Conference on Lexical and Computational Semantics (* SEM", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Peter Exner, Marcus Klang, and Pierre Nugues. 2015. A distant supervision approach to semantic role la- beling. In Fourth Joint Conference on Lexical and Computational Semantics (* SEM 2015).", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "LIBLINEAR: A library for large linear classification", |
| "authors": [ |
| { |
| "first": "Kai-Wei", |
| "middle": [], |
| "last": "Rong-En Fan", |
| "suffix": "" |
| }, |
| { |
| "first": "Cho-Jui", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiang-Rui", |
| "middle": [], |
| "last": "Hsieh", |
| "suffix": "" |
| }, |
| { |
| "first": "Chih-Jen", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Journal of machine learning research", |
| "volume": "9", |
| "issue": "", |
| "pages": "1871--1874", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, Xiang- Rui Wang, and Chih-Jen Lin. 2008. LIBLINEAR: A library for large linear classification. Journal of machine learning research, 9(Aug):1871-1874.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Manual of the Stockholm Ume\u00e5 corpus version 2.0", |
| "authors": [ |
| { |
| "first": "Sofia", |
| "middle": [], |
| "last": "Gustafson", |
| "suffix": "" |
| }, |
| { |
| "first": "-", |
| "middle": [], |
| "last": "Capkov\u00e1", |
| "suffix": "" |
| }, |
| { |
| "first": "Britt", |
| "middle": [], |
| "last": "Hartmann", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sofia Gustafson-Capkov\u00e1 and Britt Hartmann. 2006. Manual of the Stockholm Ume\u00e5 corpus version 2.0. Technical report, Stockholm University.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "The WEKA data mining software: an update", |
| "authors": [ |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Hall", |
| "suffix": "" |
| }, |
| { |
| "first": "Eibe", |
| "middle": [], |
| "last": "Frank", |
| "suffix": "" |
| }, |
| { |
| "first": "Geoffrey", |
| "middle": [], |
| "last": "Holmes", |
| "suffix": "" |
| }, |
| { |
| "first": "Bernhard", |
| "middle": [], |
| "last": "Pfahringer", |
| "suffix": "" |
| }, |
| { |
| "first": "Peter", |
| "middle": [], |
| "last": "Reutemann", |
| "suffix": "" |
| }, |
| { |
| "first": "Ian", |
| "middle": [ |
| "H" |
| ], |
| "last": "Witten", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "ACM SIGKDD explorations newsletter", |
| "volume": "11", |
| "issue": "1", |
| "pages": "10--18", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mark Hall, Eibe Frank, Geoffrey Holmes, Bernhard Pfahringer, Peter Reutemann, and Ian H Witten. 2009. The WEKA data mining software: an update. ACM SIGKDD explorations newsletter, 11(1):10- 18.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Extending the T\u00fcBa-D/Z treebank with GermaNet sense annotation", |
| "authors": [ |
| { |
| "first": "Verena", |
| "middle": [], |
| "last": "Henrich", |
| "suffix": "" |
| }, |
| { |
| "first": "Erhard", |
| "middle": [], |
| "last": "Hinrichs", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Language Processing and Knowledge in the Web", |
| "volume": "", |
| "issue": "", |
| "pages": "89--96", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Verena Henrich and Erhard Hinrichs. 2013. Extending the T\u00fcBa-D/Z treebank with GermaNet sense anno- tation. In Language Processing and Knowledge in the Web, pages 89-96. Springer Berlin Heidelberg.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Consistency of manual sense annotation and integration into the T\u00fcBa-D/Z treebank", |
| "authors": [ |
| { |
| "first": "Verena", |
| "middle": [], |
| "last": "Henrich", |
| "suffix": "" |
| }, |
| { |
| "first": "Erhard", |
| "middle": [], |
| "last": "Hinrichs", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 13th International Workshop on Treebanks and Linguistic Theories (TLT13)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Verena Henrich and Erhard Hinrichs. 2014. Con- sistency of manual sense annotation and integration into the T\u00fcBa-D/Z treebank. In Proceedings of the 13th International Workshop on Treebanks and Lin- guistic Theories (TLT13).", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Bootstrapping parsers via syntactic projection across parallel texts", |
| "authors": [ |
| { |
| "first": "Rebecca", |
| "middle": [], |
| "last": "Hwa", |
| "suffix": "" |
| }, |
| { |
| "first": "Philip", |
| "middle": [], |
| "last": "Resnik", |
| "suffix": "" |
| }, |
| { |
| "first": "Amy", |
| "middle": [], |
| "last": "Weinberg", |
| "suffix": "" |
| }, |
| { |
| "first": "Clara", |
| "middle": [], |
| "last": "Cabezas", |
| "suffix": "" |
| }, |
| { |
| "first": "Okan", |
| "middle": [], |
| "last": "Kolak", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Natural language engineering", |
| "volume": "11", |
| "issue": "03", |
| "pages": "311--325", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rebecca Hwa, Philip Resnik, Amy Weinberg, Clara Cabezas, and Okan Kolak. 2005. Bootstrapping parsers via syntactic projection across parallel texts. Natural language engineering, 11(03):311-325.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Europarl: A parallel corpus for statistical machine translation", |
| "authors": [ |
| { |
| "first": "Philipp", |
| "middle": [], |
| "last": "Koehn", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "MT summit", |
| "volume": "5", |
| "issue": "", |
| "pages": "79--86", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In MT summit, vol- ume 5, pages 79-86.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "A post-processing approach to statistical word alignment reflecting alignment tendency between part-of-speeches", |
| "authors": [ |
| { |
| "first": "Jae-Hee", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Seung-Wook", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Gumwon", |
| "middle": [], |
| "last": "Hong", |
| "suffix": "" |
| }, |
| { |
| "first": "Young-Sook", |
| "middle": [], |
| "last": "Hwang", |
| "suffix": "" |
| }, |
| { |
| "first": "Sang-Bum", |
| "middle": [], |
| "last": "Kim", |
| "suffix": "" |
| }, |
| { |
| "first": "Hae-Chang", |
| "middle": [], |
| "last": "Rim", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 23rd International Conference on Computational Linguistics: Posters", |
| "volume": "", |
| "issue": "", |
| "pages": "623--629", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jae-Hee Lee, Seung-Wook Lee, Gumwon Hong, Young-Sook Hwang, Sang-Bum Kim, and Hae- Chang Rim. 2010. A post-processing approach to statistical word alignment reflecting alignment ten- dency between part-of-speeches. In Proceedings of the 23rd International Conference on Computa- tional Linguistics: Posters, pages 623-629. Associ- ation for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "On coreference resolution performance metrics", |
| "authors": [ |
| { |
| "first": "Xiaoqiang", |
| "middle": [], |
| "last": "Luo", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "25--32", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Xiaoqiang Luo. 2005. On coreference resolution per- formance metrics. In Proceedings of the confer- ence on Human Language Technology and Empiri- cal Methods in Natural Language Processing, pages 25-32. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "The Stanford CoreNLP natural language processing toolkit", |
| "authors": [ |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| }, |
| { |
| "first": "Mihai", |
| "middle": [], |
| "last": "Surdeanu", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Bauer", |
| "suffix": "" |
| }, |
| { |
| "first": "Jenny", |
| "middle": [], |
| "last": "Finkel", |
| "suffix": "" |
| }, |
| { |
| "first": "Steven", |
| "middle": [ |
| "J" |
| ], |
| "last": "Bethard", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Mc-Closky", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Association for Computational Linguistics (ACL) System Demonstrations", |
| "volume": "", |
| "issue": "", |
| "pages": "55--60", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David Mc- Closky. 2014. The Stanford CoreNLP natural lan- guage processing toolkit. In Association for Compu- tational Linguistics (ACL) System Demonstrations, pages 55-60.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Transferring coreference resolvers with posterior regularization", |
| "authors": [ |
| { |
| "first": "F", |
| "middle": [ |
| "T" |
| ], |
| "last": "Andr\u00e9", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Martins", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
| "volume": "1", |
| "issue": "", |
| "pages": "1427--1437", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Andr\u00e9 F. T. Martins. 2015. Transferring coreference resolvers with posterior regularization. In Proceed- ings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th Interna- tional Joint Conference on Natural Language Pro- cessing (Volume 1: Long Papers), pages 1427-1437, Beijing, China, July. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Distant supervision for relation extraction without labeled data", |
| "authors": [ |
| { |
| "first": "Mike", |
| "middle": [], |
| "last": "Mintz", |
| "suffix": "" |
| }, |
| { |
| "first": "Steven", |
| "middle": [], |
| "last": "Bills", |
| "suffix": "" |
| }, |
| { |
| "first": "Rion", |
| "middle": [], |
| "last": "Snow", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Jurafsky", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP", |
| "volume": "2", |
| "issue": "", |
| "pages": "1003--1011", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mike Mintz, Steven Bills, Rion Snow, and Dan Ju- rafsky. 2009. Distant supervision for relation ex- traction without labeled data. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Vol- ume 2-Volume 2, pages 1003-1011. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "SUC-CORE: A balanced corpus annotated with noun phrase coreference", |
| "authors": [ |
| { |
| "first": "Kristina", |
| "middle": [], |
| "last": "Nilsson Bj\u00f6rkenstam", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Northern European Journal of Language Technology (NEJLT)", |
| "volume": "3", |
| "issue": "", |
| "pages": "19--39", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kristina Nilsson Bj\u00f6rkenstam. 2013. SUC-CORE: A balanced corpus annotated with noun phrase coref- erence. Northern European Journal of Language Technology (NEJLT), 3:19-39.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "MaltParser: A language-independent system for data-driven dependency parsing", |
| "authors": [ |
| { |
| "first": "Joakim", |
| "middle": [], |
| "last": "Nivre", |
| "suffix": "" |
| }, |
| { |
| "first": "Johan", |
| "middle": [], |
| "last": "Hall", |
| "suffix": "" |
| }, |
| { |
| "first": "Jens", |
| "middle": [], |
| "last": "Nilsson", |
| "suffix": "" |
| }, |
| { |
| "first": "Atanas", |
| "middle": [], |
| "last": "Chanev", |
| "suffix": "" |
| }, |
| { |
| "first": "G\u00fclsen", |
| "middle": [], |
| "last": "Eryigit", |
| "suffix": "" |
| }, |
| { |
| "first": "Sandra", |
| "middle": [], |
| "last": "K\u00fcbler", |
| "suffix": "" |
| }, |
| { |
| "first": "Svetoslav", |
| "middle": [], |
| "last": "Marinov", |
| "suffix": "" |
| }, |
| { |
| "first": "Erwin", |
| "middle": [], |
| "last": "Marsi", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Natural Language Engineering", |
| "volume": "13", |
| "issue": "02", |
| "pages": "95--135", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Joakim Nivre, Johan Hall, Jens Nilsson, Atanas Chanev, G\u00fclsen Eryigit, Sandra K\u00fcbler, Svetoslav Marinov, and Erwin Marsi. 2007. MaltParser: A language-independent system for data-driven de- pendency parsing. Natural Language Engineering, 13(02):95-135.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Stagger: An open-source part of speech tagger for Swedish", |
| "authors": [ |
| { |
| "first": "", |
| "middle": [], |
| "last": "Robert\u00f6stling", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Northern European Journal of Language Technology (NEJLT)", |
| "volume": "3", |
| "issue": "", |
| "pages": "1--18", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Robert\u00d6stling. 2013. Stagger: An open-source part of speech tagger for Swedish. Northern European Journal of Language Technology (NEJLT), 3:1-18.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "BLEU: a method for automatic evaluation of machine translation", |
| "authors": [ |
| { |
| "first": "Kishore", |
| "middle": [], |
| "last": "Papineni", |
| "suffix": "" |
| }, |
| { |
| "first": "Salim", |
| "middle": [], |
| "last": "Roukos", |
| "suffix": "" |
| }, |
| { |
| "first": "Todd", |
| "middle": [], |
| "last": "Ward", |
| "suffix": "" |
| }, |
| { |
| "first": "Wei-Jing", |
| "middle": [], |
| "last": "Zhu", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of the 40th annual meeting on association for computational linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "311--318", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for compu- tational linguistics, pages 311-318. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "CoNLL-2011 shared task: Modeling unrestricted coreference in ontonotes", |
| "authors": [ |
| { |
| "first": "Sameer", |
| "middle": [], |
| "last": "Pradhan", |
| "suffix": "" |
| }, |
| { |
| "first": "Lance", |
| "middle": [], |
| "last": "Ramshaw", |
| "suffix": "" |
| }, |
| { |
| "first": "Mitchell", |
| "middle": [], |
| "last": "Marcus", |
| "suffix": "" |
| }, |
| { |
| "first": "Martha", |
| "middle": [], |
| "last": "Palmer", |
| "suffix": "" |
| }, |
| { |
| "first": "Ralph", |
| "middle": [], |
| "last": "Weischedel", |
| "suffix": "" |
| }, |
| { |
| "first": "Nianwen", |
| "middle": [], |
| "last": "Xue", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task", |
| "volume": "", |
| "issue": "", |
| "pages": "1--27", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sameer Pradhan, Lance Ramshaw, Mitchell Marcus, Martha Palmer, Ralph Weischedel, and Nianwen Xue. 2011. CoNLL-2011 shared task: Modeling unrestricted coreference in ontonotes. In Proceed- ings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task, pages 1- 27. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "CoNLL-2012 shared task: Modeling multilingual unrestricted coreference in ontonotes", |
| "authors": [ |
| { |
| "first": "Alessandro", |
| "middle": [], |
| "last": "Sameer Pradhan", |
| "suffix": "" |
| }, |
| { |
| "first": "Nianwen", |
| "middle": [], |
| "last": "Moschitti", |
| "suffix": "" |
| }, |
| { |
| "first": "Olga", |
| "middle": [], |
| "last": "Xue", |
| "suffix": "" |
| }, |
| { |
| "first": "Yuchen", |
| "middle": [], |
| "last": "Uryupina", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Joint Conference on EMNLP and CoNLL-Shared Task", |
| "volume": "", |
| "issue": "", |
| "pages": "1--40", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Olga Uryupina, and Yuchen Zhang. 2012. CoNLL- 2012 shared task: Modeling multilingual unre- stricted coreference in ontonotes. In Joint Confer- ence on EMNLP and CoNLL-Shared Task, pages 1- 40. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Translationbased projection for multilingual coreference resolution", |
| "authors": [ |
| { |
| "first": "Altaf", |
| "middle": [], |
| "last": "Rahman", |
| "suffix": "" |
| }, |
| { |
| "first": "Vincent", |
| "middle": [], |
| "last": "Ng", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "720--730", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Altaf Rahman and Vincent Ng. 2012. Translation- based projection for multilingual coreference reso- lution. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, pages 720-730. Association for Computa- tional Linguistics.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "BLANC: Implementing the rand index for coreference evaluation", |
| "authors": [ |
| { |
| "first": "Marta", |
| "middle": [], |
| "last": "Recasens", |
| "suffix": "" |
| }, |
| { |
| "first": "Eduard", |
| "middle": [], |
| "last": "Hovy", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Natural Language Engineering", |
| "volume": "17", |
| "issue": "04", |
| "pages": "485--510", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Marta Recasens and Eduard Hovy. 2011. BLANC: Implementing the rand index for coreference evalu- ation. Natural Language Engineering, 17(04):485- 510.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "IMS HotCoref DE: A data-driven co-reference resolver for German", |
| "authors": [ |
| { |
| "first": "Ina", |
| "middle": [], |
| "last": "R\u00f6siger", |
| "suffix": "" |
| }, |
| { |
| "first": "Jonas", |
| "middle": [], |
| "last": "Kuhn", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "LREC", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ina R\u00f6siger and Jonas Kuhn. 2016. IMS HotCoref DE: A data-driven co-reference resolver for German. In LREC.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "A machine learning approach to coreference resolution of noun phrases", |
| "authors": [], |
| "year": 2001, |
| "venue": "Computational linguistics", |
| "volume": "27", |
| "issue": "4", |
| "pages": "521--544", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Wee Meng Soon, Hwee Tou Ng, and Daniel Chung Yong Lim. 2001. A machine learning ap- proach to coreference resolution of noun phrases. Computational linguistics, 27(4):521-544.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Using syntactic dependencies to solve coreferences", |
| "authors": [ |
| { |
| "first": "Marcus", |
| "middle": [], |
| "last": "Stamborg", |
| "suffix": "" |
| }, |
| { |
| "first": "Dennis", |
| "middle": [], |
| "last": "Medved", |
| "suffix": "" |
| }, |
| { |
| "first": "Peter", |
| "middle": [], |
| "last": "Exner", |
| "suffix": "" |
| }, |
| { |
| "first": "Pierre", |
| "middle": [], |
| "last": "Nugues", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Joint Conference on EMNLP and CoNLL-Shared Task", |
| "volume": "", |
| "issue": "", |
| "pages": "64--70", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Marcus Stamborg, Dennis Medved, Peter Exner, and Pierre Nugues. 2012. Using syntactic dependen- cies to solve coreferences. In Joint Conference on EMNLP and CoNLL-Shared Task, pages 64-70. As- sociation for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "Parallel data, tools and interfaces in OPUS", |
| "authors": [ |
| { |
| "first": "J\u00f6rg", |
| "middle": [], |
| "last": "Tiedemann", |
| "suffix": "" |
| }, |
| { |
| "first": ".", |
| "middle": [ |
| ";" |
| ], |
| "last": "", |
| "suffix": "" |
| }, |
| { |
| "first": "Khalid", |
| "middle": [], |
| "last": "Choukri", |
| "suffix": "" |
| }, |
| { |
| "first": "Thierry", |
| "middle": [], |
| "last": "Declerck", |
| "suffix": "" |
| }, |
| { |
| "first": "Bente", |
| "middle": [], |
| "last": "Mehmet Ugur Dogan", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Maegaard", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J\u00f6rg Tiedemann. 2012. Parallel data, tools and in- terfaces in OPUS. In Nicoletta Calzolari (Con- ference Chair), Khalid Choukri, Thierry Declerck, Mehmet Ugur Dogan, Bente Maegaard, Joseph Mar- iani, Jan Odijk, and Stelios Piperidis, editors, Pro- ceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12), Is- tanbul, Turkey, may. European Language Resources Association (ELRA).", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "A modeltheoretic coreference scoring scheme", |
| "authors": [ |
| { |
| "first": "Marc", |
| "middle": [], |
| "last": "Vilain", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Burger", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Aberdeen", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Proceedings of the 6th conference on Message understanding", |
| "volume": "", |
| "issue": "", |
| "pages": "45--52", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Marc Vilain, John Burger, John Aberdeen, Dennis Con- nolly, and Lynette Hirschman. 1995. A model- theoretic coreference scoring scheme. In Proceed- ings of the 6th conference on Message understand- ing, pages 45-52. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "Data Mining: Practical machine learning tools and techniques", |
| "authors": [ |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Ian", |
| "suffix": "" |
| }, |
| { |
| "first": "Eibe", |
| "middle": [], |
| "last": "Witten", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Frank", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ian H Witten and Eibe Frank. 2005. Data Mining: Practical machine learning tools and techniques. Morgan Kaufmann.", |
| "links": null |
| }, |
| "BIBREF32": { |
| "ref_id": "b32", |
| "title": "Collective cross-document relation extraction without labelled data", |
| "authors": [ |
| { |
| "first": "Limin", |
| "middle": [], |
| "last": "Yao", |
| "suffix": "" |
| }, |
| { |
| "first": "Sebastian", |
| "middle": [], |
| "last": "Riedel", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "Mccallum", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "1013--1023", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Limin Yao, Sebastian Riedel, and Andrew McCallum. 2010. Collective cross-document relation extrac- tion without labelled data. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 1013-1023. Associa- tion for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF33": { |
| "ref_id": "b33", |
| "title": "Inducing multilingual text analysis tools via robust projection across aligned corpora", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Yarowsky", |
| "suffix": "" |
| }, |
| { |
| "first": "Grace", |
| "middle": [], |
| "last": "Ngai", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Wicentowski", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Proceedings of the first international conference on Human language technology research", |
| "volume": "", |
| "issue": "", |
| "pages": "1--8", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David Yarowsky, Grace Ngai, and Richard Wicen- towski. 2001. Inducing multilingual text analysis tools via robust projection across aligned corpora. In Proceedings of the first international conference on Human language technology research, pages 1- 8. Association for Computational Linguistics.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "type_str": "figure", |
| "uris": null, |
| "text": "", |
| "num": null |
| }, |
| "TABREF1": { |
| "content": "<table><tr><td>: Hand-written rules for noun phrase iden-</td></tr><tr><td>tification for Swedish based on SUC-Core. The</td></tr><tr><td>rules are ordered by precedence from top to bot-</td></tr><tr><td>tom</td></tr><tr><td># Description</td></tr><tr><td>1 Remove words from the beginning or the</td></tr><tr><td>end of the phrase if they have the POS tags</td></tr><tr><td>ET, EF or VB.</td></tr><tr><td>2 The first and last words closest to the men-</td></tr><tr><td>tion head with the HP POS tag and all</td></tr><tr><td>words further from the mention head is re-</td></tr><tr><td>moved from the phrase.</td></tr><tr><td>3 Remove words from the beginning or the</td></tr><tr><td>end of the phrase if they have the POS tags</td></tr><tr><td>AB or MAD.</td></tr><tr><td>4 The first and last words closest to the men-</td></tr><tr><td>tion head with the HP POS tag and with</td></tr><tr><td>the dependency arch SS and all words fur-</td></tr><tr><td>ther from the mention head is removed</td></tr><tr><td>from the phrase.</td></tr><tr><td>5 Remove words from the end of the phrase</td></tr><tr><td>if they have the POS tag PP.</td></tr><tr><td>6 Remove words from the beginning or the</td></tr><tr><td>end of the phrase if they have the POS tag</td></tr><tr><td>PAD.</td></tr></table>", |
| "num": null, |
| "type_str": "table", |
| "text": "", |
| "html": null |
| }, |
| "TABREF2": { |
| "content": "<table><tr><td>: Additional hand-written rules for post</td></tr><tr><td>processing the identified noun phrases</td></tr><tr><td>whether a mention and its closest antecedent core-</td></tr><tr><td>fer (Soon et al., 2001). When generating a training</td></tr><tr><td>set, the negative examples are more frequent than</td></tr></table>", |
| "num": null, |
| "type_str": "table", |
| "text": "", |
| "html": null |
| }, |
| "TABREF3": { |
| "content": "<table><tr><td>Rule Mentions are identical Mention head words are identical POS of anaphora head word is PN POS of antecedent head word is PN POS of anaphora head word is PM POS of antecedent head word is PM Anaphora head word has the morphological feat DT Boolean ! Type sv de Boolean ! ! Boolean ! ! Boolean ! Boolean ! Boolean ! Boolean ! Antecedent head grammatical article Enum ! Anaphora head grammatical article Enum ! Antecedent grammatical number Enum ! Anaphora grammatical number Enum ! Checks if mention contains a word which is a male Boolean ! first name Checks if mention contains a word which is a female Boolean ! first name Checks if mention contains a word which is a job Boolean ! title Checks if mention contains a word which is a male Boolean ! first name Checks if mention contains a word which is a female Boolean ! first name Checks if mention contains a word which is a job Boolean ! title Number of intervening sentences between the two Integer ! mentions. Max. 10. Grammatical gender of antecedent head word Enum ! ! Grammatical gender of anaphora head word Enum ! ! Anaphora head is subject Enum ! Antecedent head is subject Enum ! Anaphora has the morphological feature gen Enum ! Antecedent has the morphological feature gen Enum ! Anaphora has the morphological feature ind Enum ! Antecedent has the morphological feature ind Enum ! Anaphora has the morphological feature nom Enum ! Antecedent has the morphological feature nom Enum ! Anaphora has the morphological feature sg Enum ! Antecedent has the morphological feature sg Enum</td></tr></table>", |
| "num": null, |
| "type_str": "table", |
| "text": ".", |
| "html": null |
| }, |
| "TABREF4": { |
| "content": "<table/>", |
| "num": null, |
| "type_str": "table", |
| "text": "The feature set used for Swedish (sv) andGerman (de)", |
| "html": null |
| }, |
| "TABREF6": { |
| "content": "<table><tr><td colspan=\"7\">Language Method Swedish J48 Random forest Logistic regression 84.77 13.37 MUC6 B 3 61.43 37.78 40.97 CEAF E CEAF M BLANC MELA CoNLL 42.36 41.51 46.73 61.37 37.72 41.03 41.22 46.71 42.46 1.95 16.68 15.5 33.37</td></tr><tr><td>German</td><td>J48</td><td>82.69 19.74</td><td>5.86</td><td>26.75</td><td>19.56</td><td>36.1</td></tr><tr><td/><td colspan=\"2\">Random forest Logistic regression 83.71 77.24 24.16 17.6</td><td>9.53 4.5</td><td>26.94 25.58</td><td>32.72 16.61</td><td>36.98 35.27</td></tr></table>", |
| "num": null, |
| "type_str": "table", |
| "text": "End-to-end results using predicted mentions", |
| "html": null |
| } |
| } |
| } |
| } |