| { |
| "paper_id": "P15-1039", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T09:11:38.650342Z" |
| }, |
| "title": "Generating High Quality Proposition Banks for Multilingual Semantic Role Labeling", |
| "authors": [ |
| { |
| "first": "Alan", |
| "middle": [], |
| "last": "Akbik", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "alan.akbik@tu-berlin.de" |
| }, |
| { |
| "first": "Laura", |
| "middle": [], |
| "last": "Chiticariu", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Marina", |
| "middle": [], |
| "last": "Danilevsky", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Yunyao", |
| "middle": [], |
| "last": "Li", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "yunyaoli@us.ibm.com" |
| }, |
| { |
| "first": "Shivakumar", |
| "middle": [], |
| "last": "Vaithyanathan", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Huaiyu", |
| "middle": [], |
| "last": "Zhu", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "huaiyu@us.ibm.com" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
"abstract": "Semantic role labeling (SRL) is crucial to natural language understanding as it identifies the predicate-argument structure in text with semantic labels. Unfortunately, resources required to construct SRL models are expensive to obtain and simply do not exist for most languages. In this paper, we present a two-stage method to enable the construction of SRL models for resource-poor languages by exploiting monolingual SRL and multilingual parallel data. Experimental results show that our method outperforms existing methods. We use our method to generate Proposition Banks with high to reasonable quality for 7 languages in three language families and release these resources to the research community.",
| "pdf_parse": { |
| "paper_id": "P15-1039", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
"text": "Semantic role labeling (SRL) is crucial to natural language understanding as it identifies the predicate-argument structure in text with semantic labels. Unfortunately, resources required to construct SRL models are expensive to obtain and simply do not exist for most languages. In this paper, we present a two-stage method to enable the construction of SRL models for resource-poor languages by exploiting monolingual SRL and multilingual parallel data. Experimental results show that our method outperforms existing methods. We use our method to generate Proposition Banks with high to reasonable quality for 7 languages in three language families and release these resources to the research community.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
"text": "Semantic role labeling (SRL) is the task of automatically labeling predicates and arguments in a sentence with shallow semantic labels. This level of analysis provides a more stable semantic representation across syntactically different sentences, thereby enabling a range of NLP tasks such as information extraction and question answering (Shen and Lapata, 2007; Maqsud et al., 2014) . Projects such as the Proposition Bank (PropBank) (Palmer et al., 2005) spent considerable effort to annotate corpora with semantic labels, in turn enabling supervised learning of statistical SRL parsers for English. Unfortunately, due to the high costs of manual annotation, comparable SRL resources do not exist for most other languages, with few exceptions (Haji\u010d et al., 2009; Erk et al., 2003; Zaghouani et al., 2010; Vaidya et al., 2011) .",
| "cite_spans": [ |
| { |
| "start": 340, |
| "end": 363, |
| "text": "(Shen and Lapata, 2007;", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 364, |
| "end": 384, |
| "text": "Maqsud et al., 2014)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 436, |
| "end": 457, |
| "text": "(Palmer et al., 2005)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 747, |
| "end": 767, |
| "text": "(Haji\u010d et al., 2009;", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 768, |
| "end": 785, |
| "text": "Erk et al., 2003;", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 786, |
| "end": 809, |
| "text": "Zaghouani et al., 2010;", |
| "ref_id": null |
| }, |
| { |
| "start": 810, |
| "end": 830, |
| "text": "Vaidya et al., 2011)", |
| "ref_id": "BIBREF26" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "As a cost-effective alternative to manual annotation, previous work has investigated the direct projection of semantic labels from a resource rich language (English) to a resource poor target language (TL) in parallel corpora (Pado, 2007; Van der Plas et al., 2011) . The underlying assumption is that original and translated sentences in parallel corpora are semantically broadly equivalent. Hence, if English sentences of a parallel corpus are automatically labeled using an SRL system, these labels can be projected onto aligned words in the TL corpus, thereby automatically labeling the TL corpus with semantic labels. This way, PropBank-like resources can automatically be created that enable the training of statistical SRL systems for new TLs.", |
| "cite_spans": [ |
| { |
| "start": 226, |
| "end": 238, |
| "text": "(Pado, 2007;", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 239, |
| "end": 265, |
| "text": "Van der Plas et al., 2011)", |
| "ref_id": "BIBREF28" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "However, as noted in previous work (Pado, 2007; Van der Plas et al., 2011) , aligned sentences in parallel corpora often exhibit issues such as translation shifts that go against this assumption. (Figure 1: Pair of parallel sentences from French gold with word alignments (dotted lines), SRL labels for the English sentence, and gold SRL labels for the French sentence. Only two of the seven English SRL labels should be projected here.) For example, in Fig. 1 , the English sentence \"We need to hold people responsible\" is translated into a French sentence that literally reads as \"There need to exist those responsible\". Hence, the predicate label of the English word \"hold\" should not be projected onto the French verb, which has a different meaning. As the example in Fig. 1 shows, this means that only a subset of all SL labels can be directly projected.",
| "cite_spans": [ |
| { |
| "start": 35, |
| "end": 47, |
| "text": "(Pado, 2007;", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 48, |
| "end": 74, |
| "text": "Van der Plas et al., 2011)", |
| "ref_id": "BIBREF28" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 155, |
| "end": 163, |
| "text": "Figure 1", |
| "ref_id": null |
| }, |
| { |
| "start": 450, |
| "end": 456, |
| "text": "Fig. 1", |
| "ref_id": null |
| }, |
| { |
| "start": 768, |
| "end": 774, |
| "text": "Fig. 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this paper, we aim to create PropBank-like resources for a range of languages from different language groups. To this end, we propose a two-stage approach to cross-lingual semantic labeling that addresses such errors, shown in Fig. 2 : Given a parallel corpus in which the source language (SL) side is automatically labeled with PropBank labels and the TL side is syntactically parsed, we use a filtered projection approach that allows the projection only of high-confidence SL labels. This results in a TL corpus with low recall but high precision. In the second stage, we repeatedly sample a subset of complete TL sentences and train a classifier to iteratively add new labels, significantly increasing the recall in the TL corpus while retaining the improvement in precision.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 230, |
| "end": 236, |
| "text": "Fig. 2", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "Our contributions are: (1) We propose filtered projection focused specifically on raising the precision of projected labels, based on a detailed analysis of direct projection errors. (2) We propose a bootstrap learning approach to retrain the SRL to iteratively improve recall without a significant reduction of precision, especially for arguments. (3) We demonstrate the effectiveness and generalizability of our approach via an extensive set of experiments over 7 different language pairs. (4) We generate PropBanks for each of these languages and release them to the research community. 1",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Stage 1 of our approach (Fig. 2) is designed to create a TL corpus with high precision semantic labels.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 24, |
| "end": 32, |
| "text": "(Fig. 2)", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Stage 1: Filtered Annotation Projection", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Direct Projection The idea of direct annotation projection (Van der Plas et al., 2011) is to transfer semantic labels from SL sentences to TL sentences according to word alignments. Formally, for each pair of sentences s SL and s TL in the parallel corpus, the word alignment produces alignment pairs (w SL,i , w TL,i ), where w SL,i and w TL,i are words from s SL and s TL respectively. Under direct projection, if l SL,i is a predicate label for w SL,i and (w SL,i , w TL,i ) is an alignment pair, then l SL,i is transferred to w TL,i ; If l SL,j is a predicate-argument label for (w SL,i , w SL,j ), and (w SL,i , w TL,i ) and (w SL,j , w TL,j ) are alignment pairs, then l SL,j is transferred to (w TL,i , w TL,j ), as illustrated below.", |
| "cite_spans": [ |
| { |
| "start": 59, |
| "end": 86, |
| "text": "(Van der Plas et al., 2011)", |
| "ref_id": "BIBREF28" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Stage 1: Filtered Annotation Projection", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Filtered Projection As discussed earlier, direct projection is vulnerable to errors stemming from issues such as translation shifts. We propose filtered projection focused specifically on improving the precision of projected labels. Specifically, for a pair of sentences s SL and s TL in the parallel corpus, we retain the semantic label l SL,i projected from w SL,i onto w TL,i if and only if it satisfies the filtering policies. This results in a target corpus containing fewer labels but of higher precision compared to that obtained via direct projection.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Stage 1: Filtered Annotation Projection", |
| "sec_num": "2" |
| }, |
| { |
| "text": "In ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Stage 1: Filtered Annotation Projection", |
| "sec_num": "2" |
| }, |
| { |
"text": "We observe that direct projection labels have both low precision and low recall (see Tab. 3 (Direct)). \u2022 Translation Shift: Predicate Mismatch The most common predicate errors (37%) are translation shifts in which an English predicate is aligned to a French verb with a different meaning. Fig. 1 illustrates such a translation shift: label hold.01 of the English verb hold is wrongly projected onto the French verb ait, which is labeled as exist.01 in French gold.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 289, |
| "end": 295, |
| "text": "Fig. 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Error Analysis", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "\u2022 Translation Shift: Verb\u2192Non-Verb is another common predicate error (36%). English verbs may be aligned with TL words other than verbs, which is often indicative of translation shifts. For instance, in the following sentence pair", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Analysis of False Negatives", |
| "sec_num": null |
| }, |
| { |
"text": "sSL: We know what happened / sFR: On connait la suite",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Analysis of False Negatives", |
| "sec_num": null |
| }, |
| { |
"text": "The French sentence literally reads as \"We know the result\": the English verb happen is aligned to the French noun suite (result), and the English predicate label happen.01 is thus wrongly projected onto it.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Analysis of False Negatives", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2022 Non-Argument Head The most common argument error (33%) is caused by the projection of argument labels onto words other than the syntactic head of a target verb's argument. For example, in Fig. 1 the label A1 on the English hold is wrongly transferred to the French ait, which is not the syntactic head of the complement.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 190, |
| "end": 196, |
| "text": "Fig. 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Analysis of False Negatives", |
| "sec_num": null |
| }, |
| { |
| "text": "We consider the following filters to remove the most common types of false positives. Verb Filter (VF) targets Verb\u2192Non-Verb translation shift errors (Van der Plas et al., 2011). Formally, if direct projection transfers predicate label l SL,i from w SL,i onto w TL,i , retain l SL,i only if both w SL,i and w TL,i are verbs. Translation Filter (TF) handles both Predicate Mismatch and Verb\u2192Non-Verb translation shift errors. It makes use of a translation dictionary and allows projection only if the TL verb is a valid translation of the SL verb. In addition, in order to ensure consistent predicate labels throughout the TL corpus, if a SL verb has several possible synonymous translations, it allows projection only for the most commonly observed translation. Formally, for an aligned pair (", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Filters", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "w SL,i , w TL,i ) where w SL,i has predicate label l SL,i , if (w SL,i , w TL,i )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Filters", |
| "sec_num": "2.3" |
| }, |
| { |
"text": "is not a verb-to-verb translation from SL to TL, assign no label to w TL,i . Otherwise, split the set of SL translations of w TL,i into synonym sets S 1 , S 2 , . . . ; For each k, let W k be the subset of S k most commonly aligned with w TL,i ; If w SL,i is in one of these W k , assign label l SL,i to w TL,i ; Otherwise assign no label to w TL,i . Reattachment Heuristic (RH) targets non-argument head errors that occur if a TL argument is not the direct child of a verb in the dependency parse tree of its sentence. 4 Assume direct projection transfers the predicate-argument label l SL,j from (w SL,i , w SL,j ) onto (w TL,i , w TL,j ). Find the immediate ancestor verb of w TL,j in the dependency parse tree. Denote as w TL,k its child that is an ancestor of w TL,j . Assign the label l SL,j to (w TL,i , w TL,k ) instead of (w TL,i , w TL,j ).",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Filters", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "RH ensures that labels are always attached to the syntactic heads of their respective arguments, as de- termined by the dependency tree. An example of such reattachment is illustrated in Fig. 1 (curved arrow on TL sentence).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 187, |
| "end": 193, |
| "text": "Fig. 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Filters", |
| "sec_num": "2.3" |
| }, |
| { |
"text": "We now present an initial validation of the effectiveness of the aforementioned filters by evaluating their contribution to annotation projection quality for French gold, as summarized in Tab. 3. VF reduces the number of wrongly projected predicate labels, resulting in an increase of predicate precision to 59% (\u219114 pp), without impact on recall. As a side effect, argument precision also increases to 53% (\u219110 pp), since, if a predicate label cannot be projected, none of its arguments can be projected. TF reduces the number of wrongly projected predicate labels even more significantly, increasing predicate precision to 88% (\u219143 pp), at a small cost to recall. Again, argument precision increases as a side effect. However, as expected, argument recall decreases significantly (\u219314 pp, to 17%), as many arguments can no longer be projected. RH targets argument labels directly (unlike VF and TF), significantly increasing argument precision and slightly increasing argument recall. In summary, initial experiments confirm that our proposed filters are effective in improving the precision of projected labels at a small cost in recall. In fact, TF+RH results in nearly 100% improvement in predicate and argument label precision with a much smaller drop in recall.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Filter Effectiveness", |
| "sec_num": "2.4" |
| }, |
| { |
"text": "Filtered projection removes the most common errors discussed in Sec. 2.2. Most of the remaining errors come from the following sources. SRL Errors The most common residual errors in the remaining projected labels, especially for argument labels, are caused by mistakes made by the English SRL system. Any wrong label it assigns to an English sentence may be projected onto the TL sentence, resulting in false positives. No English Equivalent A small number of errors occur due to French particularities that do not exist in English. Such errors include certain French verbs for which no appropriate English PropBank label exists, and French-specific syntactic particularities. 6 Gold Data Errors Our evaluation so far relies on French gold as ground truth. Unfortunately, French gold does contain a small number of errors (e.g. missing argument labels). As a result, some correctly projected labels are mistaken for false positives, causing a drop in both precision and recall. We therefore expect the true precision and recall of the approach to be somewhat higher than the estimate based on French gold.",
| "cite_spans": [ |
| { |
| "start": 678, |
| "end": 679, |
| "text": "6", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Residual Errors", |
| "sec_num": "2.5" |
| }, |
| { |
| "text": "As discussed earlier, the TL corpus generated via filtered projection suffers from low recall. We address this issue with the second stage of our method.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Stage 2: Bootstrapped Training of SRL", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Relabeling The idea of relabeling (Van der Plas et al., 2011) is to first train an SRL system over a TL corpus labeled using direct projection (with VF filter) and then use this SRL to relabel the corpus, effectively overwriting the projected labels with potentially less noisy predicted labels.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Stage 2: Bootstrapped Training of SRL", |
| "sec_num": "3" |
| }, |
| { |
| "text": "We first present an analysis on relabeling in concert with our proposed filters (Sec. 3.1), which motivates our bootstrap algorithm (Sec. 3.2).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Stage 2: Bootstrapped Training of SRL", |
| "sec_num": "3" |
| }, |
| { |
| "text": "We use the same experimental setup as in Sec. 2, and produce a labeled French corpus for each filtered annotation method. We then train an off-the-shelf SRL system (Bj\u00f6rkelund et al., 2009) on each generated corpus and use it to relabel the corpus.", |
| "cite_spans": [ |
| { |
| "start": 164, |
| "end": 189, |
| "text": "(Bj\u00f6rkelund et al., 2009)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Analysis of Relabeling Approach", |
| "sec_num": "3.1" |
| }, |
| { |
"text": "We measure precision and recall of each resulting TL corpus against French gold (see Tab. 4). Across all experiments, relabeling consistently improves recall over projection. The results also show how different factors affect the performance of relabeling. (Footnote 6: French negations, for instance, are split into a particle and a connegative. In the annotation scheme used in French gold, particles and connegatives are labeled differently.)",
| "cite_spans": [ |
| { |
| "start": 105, |
| "end": 106, |
| "text": "6", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Analysis of Relabeling Approach", |
| "sec_num": "3.1" |
| }, |
| { |
"text": "The labels produced by the trained SRL can be used either to overwrite projected labels as in (Van der Plas et al., 2011), or to supplement them (supplying labels only for words without projected labels). Whether to overwrite or supplement depends on whether labels produced by the trained SRL are of higher quality than the projected labels. We find that while predicted labels are of higher precision than directly projected labels, they are of lower precision than labels obtained via filtered projection. Therefore, for filtered projection, it makes more sense to allow predicted labels to only supplement projected labels.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Supplement vs. Overwrite Projected Labels", |
| "sec_num": null |
| }, |
| { |
"text": "We are further interested in learning the impact of sampling the data on the quality of relabeling. For the best filter found earlier (TF+RH), we compare SRL trained on the entire data set (full data) with SRL trained only on the subset of completely annotated sentences (comp. sent.), where completeness is defined as: Definition 1. A direct component of a labeled sentence s TL is either a verb in s TL or a syntactic dependent of a verb. Then s TL is k-complete if s TL contains at most k unlabeled direct components. 0-complete is abbreviated as complete.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Impact of Sampling Method",
"sec_num": null
},
{
"text": "Require: Corpus CTL with initial set of labels LTL, and resampling threshold function k(i); for i = 1 to \u221e do Let ki = k(i); Let CTL comp = {w \u2208 CTL : w \u2208 sTL, sTL is ki-complete}; Let LTL comp be subset of LTL appearing on CTL comp ; Train an SRL on (CTL comp , LTL comp ); Use the SRL to produce label set LTL new on CTL; Let CTL no.lab = {w \u2208 CTL : w not labeled by LTL}; Let LTL suppl be subset of LTL new appearing on CTL no.lab ; if LTL suppl = \u2205 then Return the SRL; end if Let LTL = LTL \u222a LTL suppl ; end for",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Algorithm 1 Bootstrap learning algorithm", |
| "sec_num": null |
| }, |
| { |
"text": "We observe that for TF+RH, when new labels supplement projected labels, relabeling over complete sentences results in better recall at slightly reduced precision, while including incomplete sentences in the training data reduces recall but improves precision. While this finding may seem counterintuitive, it can be explained by how statistical SRL works. Densely labeled training data (such as comp. sent.) usually results in an SRL that generates densely labeled sentences, yielding better recall but poorer precision. On the other hand, sparsely labeled training data results in an SRL that weighs the option of not assigning a label with higher probability, yielding better precision and poorer recall. In short, one can control the tradeoff between precision and recall of SRL output by manipulating the completeness of the training data.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Algorithm 1 Bootstrap learning algorithm", |
| "sec_num": null |
| }, |
| { |
"text": "Building on the observation that we can sample data in such a way as to favor either precision or recall, we propose a bootstrapping algorithm that trains an SRL iteratively over k-complete subsets of the data, supplemented by high-precision labels produced in the previous iteration. The detailed algorithm is depicted in Algorithm 1. Resampling Threshold Our goal is to use bootstrap learning to improve recall without sacrificing too much precision.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Bootstrap Learning", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Proposition 1. Under any resampling threshold, the set of labels L TL increases monotonically in each iteration of Algorithm 1. Since Prop. 1 guarantees the increase of the set of labels, we need to select a resampling function to favor precision while improving recall. Specifically, we use the formula k(i) = max(k 0 \u2212 i, 0), where k 0 is sufficiently large. Since the precision of labels generated by the SRL is lower than the precision of labels obtained from filtered projection, the precision of the training data is expected to decrease with the increase in recall. Therefore, starting with a high k seeks to ensure high precision labels are added to the training data in the first iterations. Decreasing k in each iteration seeks to ensure that resampling is done in an increasingly restrictive way to ensure that only high-quality annotated sentences are added to the training data, thus maintaining a high confidence in the learned SRL model.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Bootstrap Learning", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "We experimentally evaluate the effectiveness of our model with k 0 = 9. 7 As shown in Tab 4, bootstrapping outperforms relabeling, producing labels with best overall quality in terms of F 1 measure and recall for both predicates and arguments, with a relatively small cost in precision.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Effectiveness of Bootstrapping", |
| "sec_num": "3.3" |
| }, |
| { |
"text": "While Algorithm 1 guarantees the increase of recall (Prop. 1), it provides no such guarantee on precision. Therefore, it is important to experimentally decide an early termination cutoff before the SRL gets overtrained. To do so, we evaluated the performance of the bootstrapping algorithm at each iteration (Fig. 3) . We observe that for the first 3 iterations, the F 1 measure for both predicates and arguments rises, as a large increase in recall offsets a smaller drop in precision. Then the F 1 measure remains stable, with recall rising and precision falling slightly at each iteration until convergence. Resources used (cf. Tab. 5): Dependency parsers: STANFORD: (Green and Manning, 2010) , MATE-G: (Bohnet, 2010) , MATE-T: (Bohnet and Nivre, 2012) , MALT: (Nivre et al., 2006) . Parallel corpora: UN: (Rafalovitch et al., 2009) , Europarl: (Koehn, 2005) , Hindencorp: (Bojar et al., 2014) . Word alignment: The UN corpus is already word-aligned. For the others, we use the Berkeley Aligner (DeNero and Liang, 2007) .",
"cite_spans": [
{
"start": 584,
"end": 609,
"text": "(Green and Manning, 2010)",
"ref_id": "BIBREF9"
},
{
"start": 620,
"end": 634,
"text": "(Bohnet, 2010)",
"ref_id": "BIBREF4"
},
{
"start": 645,
"end": 669,
"text": "(Bohnet and Nivre, 2012)",
"ref_id": "BIBREF2"
},
{
"start": 678,
"end": 698,
"text": "(Nivre et al., 2006)",
"ref_id": "BIBREF17"
},
{
"start": 723,
"end": 749,
"text": "(Rafalovitch et al., 2009)",
"ref_id": "BIBREF21"
},
{
"start": 762,
"end": 775,
"text": "(Koehn, 2005)",
"ref_id": "BIBREF11"
},
{
"start": 790,
"end": 810,
"text": "(Bojar et al., 2014)",
"ref_id": "BIBREF5"
},
{
"start": 908,
"end": 932,
"text": "(DeNero and Liang, 2007)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 310,
"end": 317,
"text": "Fig. 3)",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Effectiveness of Bootstrapping",
"sec_num": "3.3"
},
{
"text": "Table 5. LANGUAGE | DEP. PARSER | DATA SET | #SENTENCES: Arabic | STANFORD | UN | 481K; Chinese | MATE-G | UN | 2,986K; French | MATE-T | UN | 2,542K; German | MATE-T | Europarl | 560K; Hindi | MALT | Hindencorp | 54K; Russian | MALT | UN | 2,638K; Spanish | MATE-G | UN | 2,304K",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effectiveness of Bootstrapping",
"sec_num": "3.3"
},
{
"text": "To optimize precision and avoid overtraining, we set an iteration cutoff of 3. This combination of TF+RH filters, bootstrapping with k 0 = 9 and an iteration cutoff of 3 is used in the rest of our evaluation (Sec. 4), denoted as FB best .",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Effectiveness of Bootstrapping", |
| "sec_num": "3.3" |
| }, |
| { |
"text": "We use our method to generate Proposition Banks for 7 languages and evaluate the generated resources. We seek to answer the following questions: (1) What is the estimated quality of the generated PropBanks? How well does the approach work without language-specific adaptation? (2) Are there notable differences in quality from language to language; if so, why? We also present initial investigations into how different factors affect the performance of our method.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Multilingual Experiments", |
| "sec_num": "4" |
| }, |
| { |
"text": "Data Tab. 5 lists the 7 different TLs and resources used in our experiments. 8 We chose these TLs because (1) they are among the top 10 most influential languages in the world (Weber, 1997) ; and (2) we could find language experts to evaluate the results. English is used as the SL in all our experiments.",
| "cite_spans": [ |
| { |
| "start": 77, |
| "end": 78, |
| "text": "8", |
| "ref_id": null |
| }, |
| { |
| "start": 172, |
| "end": 185, |
| "text": "(Weber, 1997)", |
| "ref_id": "BIBREF29" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Setup", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Approach Tested For each TL, we used FB best (Sec. 3.3) to generate a corpus with semantic labels. From each TL corpus, we extracted all complete sentences to form the generated PropBanks.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Setup", |
| "sec_num": "4.1" |
| }, |
| { |
"text": "8 From each parallel corpus, we only keep sentences that are considered well-formed based on a set of standard heuristics. For example, we require a well-formed sentence to end in punctuation and not to contain certain special characters. For Arabic, as the dependency parser we use has relatively poor parsing accuracy, we additionally require sentences to be shorter than 100 characters. Table 6: Estimated precision and recall over seven languages.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 390, |
| "end": 397, |
| "text": "Table 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experimental Setup", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Manual Evaluation While a gold annotated corpus for French (French gold ) was available for our experiments in the previous Sections, no such resources existed for the other TLs we wished to evaluate. We therefore chose to conduct a manual evaluation for each TL, each executed identically: For each TL we randomly selected 100 complete sentences with their generated semantic labels and assigned them to two language experts who were instructed to evaluate the semantic labels (based on their English descriptions) for the predicates and their core arguments. For each label, they were asked to determine (1) whether the label is correct;", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Setup", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "(2) if yes, then whether the boundary of the labeled constituent is correct: If also yes, mark the label as fully correct, otherwise as partially correct.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Setup", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Metrics We used the standard measures of precision, recall, and F1 to measure the performance of the SRLs, with the following two schemes: (1) Exact: Only fully correct labels are considered as true positives;", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Setup", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "(2) Partial: Both fully and partially correct matches are considered as true positives. 9", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Setup", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Tab. 6 summarizes the estimated quality of semantic labels generated by our method for all seven TLs. As can be seen, our method performed well for all Table 7: Characteristics of the generated PropBanks.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 151, |
| "end": 158, |
| "text": "Table 7", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experimental Results", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "seven languages and generated high quality semantic labels across the board. For predicate labels, the precision is over 95% and the recall is over 85% for all languages except Hindi. For argument labels, when considering partially correct matches, the precision is at least 85% (above 90% for most languages) and the recall is between 66% and 83% for all languages. These encouraging results, obtained from a diverse set of languages, imply the generalizability of our method. In addition, the inter-annotator agreement is very high for all languages, indicating that the results obtained from manual evaluation are very reliable.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Results", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "In addition, we make a number of interesting observations: Dependency Parsing Accuracy The precision for exact argument labels is significantly below that for partial matches, particularly for Hindi (\u219335 pp) and Arabic (\u219319 pp). Since argument boundaries are determined syntactically, such errors are caused by dependency parsing. The fact that Hindi and Arabic suffer the most from this issue is consistent with the poorer performance of their dependency parsers compared to other languages (Nivre et al., 2006; Green and Manning, 2010). Hindi as the Main Outlier The results for Hindi are much worse than those for the other languages. Besides the poorer dependency parser performance, the size of the parallel corpus used could be a factor: Hindencorp is one to two orders of magnitude smaller than the other corpora. The quality of the parallel corpus could be a reason as well: Hindencorp was collected from various sources, while both UN and Europarl were extracted from governmental proceedings. Language-specific Errors Certain errors occur more frequently in some languages than in others. One example is deverbal nouns in Chinese (Xue, 2006) in formal passive constructions with the support verb \u53d7. Since we currently consider only verbs for predicate labels, predicate labels are projected onto the support verbs instead of the deverbal nouns. Such errors appear for light verb constructions in all languages, but particularly affect Chinese due to the high frequency of this passive construction in the UN corpus. Low Fraction of Complete Sentences As Tab. 7 shows, the fraction of complete sentences in the generated PropBanks is rather low, indicating the impact of moderate recall on the size of the generated PropBanks. Especially for languages for which only small parallel corpora are available, such as Hindi, this points to the need to address recall issues in future work.", |
| "cite_spans": [ |
| { |
| "start": 482, |
| "end": 502, |
| "text": "(Nivre et al., 2006;", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 503, |
| "end": 527, |
| "text": "Green and Manning, 2010)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 1130, |
| "end": 1141, |
| "text": "(Xue, 2006)", |
| "ref_id": "BIBREF30" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Results", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "The observations made in Sec. 4.2 suggest a few factors that may potentially affect the performance of our method. To better understand their impact, we conducted the following initial investigation. SRL models produced in this set of experiments were evaluated using French gold , sampled and evaluated in the same way as the other experiments in this section for comparability. Data Size We varied the data size for French by downsampling the UN corpus. As one can see from Tab. 8, downsampling the dataset by one order of magnitude (to 250k sentences) only slightly affects precision, while downsampling to 25k sentences has a more pronounced but still small impact on recall. It appears that data size does not have a significant impact on the performance of our method. Language-specific Customizations While our method is language-agnostic, intuitively language-specific customization can be helpful in addressing language-specific errors. As an initial experiment, we added a simple heuristic to filter out French verbs that are commonly used for \"existential there\" constructions, as one common type of error for French involves the syntactic expletive il (Danlos, 2005) in \"existential there\" constructions such as il faut (see Fig. 1 (TL sentence) for an example) being wrongly labeled with role information. As shown in Tab. 9, this simple customization results in a small increase in precision, suggesting that language-specific customization can be helpful. Quality of English SRL As noted in Sec. 2.5, errors made by the English SRL are often propagated to the TL via projection. To assess the impact of English SRL quality, we used two different English SRL systems: CLEARNLP and MATE-SRL. As can be seen from Tab. 9, the impact of English SRL quality on argument labeling is substantial.", |
| "cite_spans": [ |
| { |
| "start": 1160, |
| "end": 1174, |
| "text": "(Danlos, 2005)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 1233, |
| "end": 1253, |
| "text": "Fig. 1 (TL sentence)", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Additional Experiments", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "To facilitate future research on multilingual SRL, we release the generated PropBanks for all 7 languages to the research community. Tab. 7 gives an overview of the resources.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Multilingual PropBanks", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "Annotation Projection in Parallel Corpora to train monolingual tools for new languages was introduced in the context of learning a PoS tagger (Yarowsky et al., 2001). Similar in spirit to our approach of using filters to increase the precision of projected labels, recent work (T\u00e4ckstr\u00f6m et al., 2013) uses token and type constraints to guide learning in cross-lingual PoS tagging. Projection of Semantic Labels was considered for FrameNet (Baker et al., 1998) in (Pad\u00f3 and Lapata, 2009; Basili et al., 2009). Recently, however, most work in the area focuses on PropBank, which has been identified as a more suitable annotation scheme for joint syntactic-semantic settings due to broader coverage (Merlo and van der Plas, 2009), and was shown to be usable for languages other than English (Monachesi et al., 2007). Direct projection of PropBank annotations was considered in (Van der Plas et al., 2011). Our approach significantly outperforms theirs in terms of recall and F 1 for both predicates and arguments (Section 3). An approach was proposed in (Van der Plas et al., 2014) in which information is aggregated at the corpus level, resulting in a significantly better SRL corpus for French. However, this approach has several practical limitations: (1) it does not consider the problem of argument identification in SRL systems, treating arguments as already given; (2) it generates rules for the argument classification step preferably from manually annotated data; (3) it has been demonstrated only for a single language (French). In contrast, our approach trains an SRL system for both predicate and argument labels in a completely automatic fashion. Furthermore, we have applied our approach to generate PropBanks for 7 languages and conducted experiments that indicate a high F 1 measure for all languages (Section 4). 
Other Related Work A number of approaches, such as model transfer (Kozhevnikov and Titov, 2013) and role induction (Titov and Klementiev, 2012), exist for the argument classification step in the SRL pipeline. In contrast, our work addresses the full SRL pipeline and seeks to generate SRL resources for TLs with English PropBank labels.", |
| "cite_spans": [ |
| { |
| "start": 142, |
| "end": 165, |
| "text": "(Yarowsky et al., 2001)", |
| "ref_id": "BIBREF32" |
| }, |
| { |
| "start": 278, |
| "end": 302, |
| "text": "(T\u00e4ckstr\u00f6m et al., 2013)", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 441, |
| "end": 461, |
| "text": "(Baker et al., 1998)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 465, |
| "end": 488, |
| "text": "(Pad\u00f3 and Lapata, 2009;", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 489, |
| "end": 509, |
| "text": "Basili et al., 2009)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 700, |
| "end": 730, |
| "text": "(Merlo and van der Plas, 2009)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 793, |
| "end": 817, |
| "text": "(Monachesi et al., 2007)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 880, |
| "end": 907, |
| "text": "(Van der Plas et al., 2011)", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 1057, |
| "end": 1084, |
| "text": "(Van der Plas et al., 2014)", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 1937, |
| "end": 1966, |
| "text": "(Kozhevnikov and Titov, 2013)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 1986, |
| "end": 2014, |
| "text": "(Titov and Klementiev, 2012)", |
| "ref_id": "BIBREF25" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "5" |
| }, |
| { |
| "text": "We proposed a two-stage method to construct multilingual SRL resources using monolingual SRL and parallel data and showed that our method outperforms previous approaches in both precision and recall. More importantly, through comprehensive experiments over seven languages from three language families, we showed that our proposed method works well across different languages without any language-specific customization. Preliminary results from additional experiments indicate that better English SRL and language-specific customization can further improve the results, which we aim to investigate in future work. A qualitative comparison against existing or under-construction PropBanks for Chinese (Xue, 2008), Hindi (Vaidya et al., 2011) or Arabic (Zaghouani et al., 2010) may be interesting, both for comparing resources and for defining language-specific customizations. In addition, we plan to expand our experiments both to more languages and to NomBank (Meyers et al., 2004)-style noun labels.", |
| "cite_spans": [ |
| { |
| "start": 701, |
| "end": 712, |
| "text": "(Xue, 2008)", |
| "ref_id": "BIBREF31" |
| }, |
| { |
| "start": 721, |
| "end": 742, |
| "text": "(Vaidya et al., 2011)", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 971, |
| "end": 992, |
| "text": "(Meyers et al., 2004)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "The resources are available on request.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "For instance, the French verb sembler may be correctly labeled as either of the synonyms seem.01 or appear.02. 3 This upper bound is different from the one reported in (Van der Plas et al., 2011), which corresponds to the inter-annotator agreement over manual annotation of 100 sentences.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "In (Pad\u00f3 and Lapata, 2009), a similar filtering method is defined over constituent-based trees to reduce the set of viable nodes for argument labels to all nodes that are not a child of some ancestor of the predicate.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "In all experiments in this paper, we derived the translation dictionaries from the WIKTIONARY project and used VERBNET and WORDNET to find SL synonym groups.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "We found that setting k0 to larger values had little impact on the final results.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Note that since the manually evaluated semantic labels are only a small fraction of the labels generated, the performance numbers obtained from manual evaluation are only an estimate of the actual quality of the generated resources. Thus the numbers obtained from manual evaluation cannot be directly compared against the numbers computed over French gold .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "The berkeley framenet project", |
| "authors": [], |
| "year": 1998, |
| "venue": "Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "86--90", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "et al.1998] Collin F Baker, Charles J Fillmore, and John B Lowe. 1998. The berkeley framenet project. In Proceedings of the 36th Annual Meeting of the As- sociation for Computational Linguistics and 17th In- ternational Conference on Computational Linguistics- Volume 1, pages 86-90. Association for Computa- tional Linguistics.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Cross-language frame semantics transfer in bilingual corpora", |
| "authors": [ |
| { |
| "first": "[", |
| "middle": [], |
| "last": "Basili", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of the Thirteenth Conference on Computational Natural Language Learning: Shared Task", |
| "volume": "", |
| "issue": "", |
| "pages": "43--48", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "[Basili et al.2009] Roberto Basili, Diego De Cao, Danilo Croce, Bonaventura Coppola, and Alessandro Mos- chitti. 2009. Cross-language frame semantics trans- fer in bilingual corpora. In Computational Linguis- tics and Intelligent Text Processing, pages 332-345. Springer. [Bj\u00f6rkelund et al.2009] Anders Bj\u00f6rkelund, Love Hafdell, and Pierre Nugues. 2009. Multilingual se- mantic role labeling. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning: Shared Task, pages 43-48. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "A transition-based system for joint part-of-speech tagging and labeled non-projective dependency parsing", |
| "authors": [ |
| { |
| "first": "Bernd", |
| "middle": [], |
| "last": "Bohnet", |
| "suffix": "" |
| }, |
| { |
| "first": "Joakim", |
| "middle": [], |
| "last": "Nivre", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the 2012", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "[Bohnet and Nivre2012] Bernd Bohnet and Joakim Nivre. 2012. A transition-based system for joint part-of-speech tagging and labeled non-projective dependency parsing. In Proceedings of the 2012", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Natural Language Processing and Computational Natural Language Learning", |
| "authors": [], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "1455--1465", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1455-1465. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Very high accuracy and fast dependency parsing is not a contradiction", |
| "authors": [ |
| { |
| "first": "Bernd", |
| "middle": [], |
| "last": "Bohnet", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 23rd International Conference on Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "89--97", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bernd Bohnet. 2010. Very high accuracy and fast dependency parsing is not a contradiction. In Proceedings of the 23rd International Conference on Computational Linguistics, pages 89-97. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Hindencorp-hindi-english and hindi-only corpus for machine translation", |
| "authors": [ |
| { |
| "first": "[", |
| "middle": [], |
| "last": "Bojar", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "[Bojar et al.2014] Ond\u0159ej Bojar, Vojt\u011bch Diatka, Pavel Rychl\u1ef3, Pavel Stra\u0148\u00e1k, V\u00edt Suchomel, Ale\u0161 Tamchyna, Daniel Zeman, et al. 2014. Hindencorp-hindi-english and hindi-only corpus for machine translation. In Proceedings of the Ninth International Conference on Language Resources and Evaluation.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Transition-based dependency parsing with selectional branching", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Mccallum2013] Jinho", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "Choi", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Mccallum", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "[Choi and McCallum2013] Jinho D. Choi and Andrew McCallum. 2013. Transition-based dependency pars- ing with selectional branching. In Proceedings of the 51st Annual Meeting of the Association for Computa- tional Linguistics.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Automatic recognition of french expletive pronoun occurrences", |
| "authors": [ |
| { |
| "first": "Laurence", |
| "middle": [], |
| "last": "Danlos", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Natural language processing. Proceedings of the 2nd International Joint Conference on Natural Language Processing (IJCNLP-05)", |
| "volume": "", |
| "issue": "", |
| "pages": "73--78", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Laurence Danlos. 2005. Automatic recog- nition of french expletive pronoun occurrences. In Natural language processing. Proceedings of the 2nd International Joint Conference on Natural Language Processing (IJCNLP-05), pages 73-78. Citeseer. [DeNero and Liang2007] John DeNero and Percy Liang. 2007. The Berkeley Aligner. http://code. google.com/p/berkeleyaligner/.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Towards a resource for lexical semantics: A large german corpus with extensive semantic annotation", |
| "authors": [ |
| { |
| "first": "[", |
| "middle": [], |
| "last": "Erk", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "[Erk et al.2003] K. Erk, A. Kowalski, S. Pado, and S. Pinkal. 2003. Towards a resource for lexical se- mantics: A large german corpus with extensive se- mantic annotation. In ACL.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Better arabic parsing: Baselines, evaluations, and analysis", |
| "authors": [ |
| { |
| "first": "[", |
| "middle": [], |
| "last": "Green", |
| "suffix": "" |
| }, |
| { |
| "first": "Spence", |
| "middle": [], |
| "last": "Green", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher D", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 23rd International Conference on Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "394--402", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "[Green and Manning2010] Spence Green and Christo- pher D Manning. 2010. Better arabic parsing: Base- lines, evaluations, and analysis. In Proceedings of the 23rd International Conference on Computational Linguistics, pages 394-402. Association for Compu- tational Linguistics.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "The conll-2009 shared task: Syntactic and semantic dependencies in multiple languages", |
| "authors": [ |
| { |
| "first": "[", |
| "middle": [], |
| "last": "Haji\u010d", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of the Thirteenth Conference on Computational Natural Language Learning: Shared Task", |
| "volume": "", |
| "issue": "", |
| "pages": "1--18", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "[Haji\u010d et al.2009] Jan Haji\u010d, Massimiliano Cia- ramita, Richard Johansson, Daisuke Kawahara, Maria Ant\u00f2nia Mart\u00ed, Llu\u00eds M\u00e0rquez, Adam Mey- ers, Joakim Nivre, Sebastian Pad\u00f3, Jan\u0160t\u011bp\u00e1nek, et al. 2009. The conll-2009 shared task: Syntactic and semantic dependencies in multiple languages. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning: Shared Task, pages 1-18. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Europarl: A parallel corpus for statistical machine translation", |
| "authors": [ |
| { |
| "first": "Philipp", |
| "middle": [], |
| "last": "Koehn", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "MT summit", |
| "volume": "5", |
| "issue": "", |
| "pages": "79--86", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Philipp Koehn. 2005. Europarl: A paral- lel corpus for statistical machine translation. In MT summit, volume 5, pages 79-86.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Cross-lingual transfer of semantic role labeling models", |
| "authors": [ |
| { |
| "first": "Mikhail", |
| "middle": [], |
| "last": "Kozhevnikov", |
| "suffix": "" |
| }, |
| { |
| "first": "Ivan", |
| "middle": [], |
| "last": "Titov", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "ACL (1)", |
| "volume": "", |
| "issue": "", |
| "pages": "1190--1200", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "[Kozhevnikov and Titov2013] Mikhail Kozhevnikov and Ivan Titov. 2013. Cross-lingual transfer of semantic role labeling models. In ACL (1), pages 1190-1200.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Nerdle: Topic-specific question answering using wikia seeds", |
| "authors": [ |
| { |
| "first": "[", |
| "middle": [], |
| "last": "Maqsud", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "25th International Conference on Computational Linguistics, Proceedings of the Conference System Demonstrations", |
| "volume": "", |
| "issue": "", |
| "pages": "81--85", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "[Maqsud et al.2014] Umar Maqsud, Sebastian Arnold, Michael H\u00fclfenhaus, and Alan Akbik. 2014. Ner- dle: Topic-specific question answering using wikia seeds. In Lamia Tounsi and Rafal Rak, editors, COL- ING 2014, 25th International Conference on Compu- tational Linguistics, Proceedings of the Conference System Demonstrations, August 23-29, 2014, Dublin, Ireland, pages 81-85. ACL.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Abstraction and generalisation in semantic role labels: Propbank, verbnet or both", |
| "authors": [ |
| { |
| "first": "[", |
| "middle": [], |
| "last": "Merlo", |
| "suffix": "" |
| }, |
| { |
| "first": "Paola", |
| "middle": [], |
| "last": "Van Der Plas2009]", |
| "suffix": "" |
| }, |
| { |
| "first": "Lonneke", |
| "middle": [], |
| "last": "Merlo", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Van Der Plas", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "ACL 2009", |
| "volume": "", |
| "issue": "", |
| "pages": "288--296", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "[Merlo and van der Plas2009] Paola Merlo and Lonneke van der Plas. 2009. Abstraction and generalisation in semantic role labels: Propbank, verbnet or both? In ACL 2009, pages 288-296.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Annotating noun argument structure for nombank", |
| "authors": [ |
| { |
| "first": "[", |
| "middle": [], |
| "last": "Meyers", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "LREC", |
| "volume": "4", |
| "issue": "", |
| "pages": "803--806", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "[Meyers et al.2004] Adam Meyers, Ruth Reeves, Cather- ine Macleod, Rachel Szekely, Veronika Zielinska, Brian Young, and Ralph Grishman. 2004. Annotat- ing noun argument structure for nombank. In LREC, volume 4, pages 803-806.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Adding semantic role annotation to a corpus of written dutch", |
| "authors": [ |
| { |
| "first": "[", |
| "middle": [], |
| "last": "Monachesi", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of the Linguistic Annotation Workshop, LAW '07", |
| "volume": "", |
| "issue": "", |
| "pages": "77--84", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "[Monachesi et al.2007] Paola Monachesi, Gerwert Stevens, and Jantine Trapman. 2007. Adding seman- tic role annotation to a corpus of written dutch. In Proceedings of the Linguistic Annotation Workshop, LAW '07, pages 77-84.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Maltparser: A data-driven parsergenerator for dependency parsing", |
| "authors": [ |
| { |
| "first": "[", |
| "middle": [], |
| "last": "Nivre", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of LREC", |
| "volume": "6", |
| "issue": "", |
| "pages": "2216--2219", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "[Nivre et al.2006] Joakim Nivre, Johan Hall, and Jens Nilsson. 2006. Maltparser: A data-driven parser- generator for dependency parsing. In Proceedings of LREC, volume 6, pages 2216-2219.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Cross-lingual annotation projection for semantic roles", |
| "authors": [ |
| { |
| "first": "Sebastian", |
| "middle": [], |
| "last": "Pad\u00f3", |
| "suffix": "" |
| }, |
| { |
| "first": "Mirella", |
| "middle": [], |
| "last": "Lapata", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Journal of Artificial Intelligence Research", |
| "volume": "36", |
| "issue": "1", |
| "pages": "307--340", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "[Pad\u00f3 and Lapata2009] Sebastian Pad\u00f3 and Mirella Lap- ata. 2009. Cross-lingual annotation projection for semantic roles. Journal of Artificial Intelligence Re- search, 36(1):307-340.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Cross-Lingual Annotation Projection Models for Role-Semantic Information", |
| "authors": [ |
| { |
| "first": "Sebastian", |
| "middle": [], |
| "last": "Pado", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sebastian Pado. 2007. Cross-Lingual Anno- tation Projection Models for Role-Semantic Informa- tion. Ph.D. thesis, Saarland University. MP.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "The proposition bank: An annotated corpus of semantic roles", |
| "authors": [ |
| { |
| "first": "[", |
| "middle": [], |
| "last": "Palmer", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Computational linguistics", |
| "volume": "31", |
| "issue": "1", |
| "pages": "71--106", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "[Palmer et al.2005] Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. The proposition bank: An annotated corpus of semantic roles. Computational linguistics, 31(1):71-106.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "United nations general assembly resolutions: A six-language parallel corpus", |
| "authors": [ |
| { |
| "first": "Alexandre", |
| "middle": [], |
| "last": "Rafalovitch", |
| "suffix": "" |
| }, |
| { |
| "first": "Robert", |
| "middle": [], |
| "last": "Dale", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of the MT Summit", |
| "volume": "12", |
| "issue": "", |
| "pages": "292--299", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "[Rafalovitch et al.2009] Alexandre Rafalovitch, Robert Dale, et al. 2009. United nations general assembly resolutions: A six-language parallel corpus. In Pro- ceedings of the MT Summit, volume 12, pages 292- 299.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Verbnet: A Broad-coverage, Comprehensive Verb Lexicon", |
| "authors": [ |
| { |
| "first": "Karin Kipper", |
| "middle": [], |
| "last": "Schuler", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Karin Kipper Schuler. 2005. Verbnet: A Broad-coverage, Comprehensive Verb Lexicon. Ph.D. thesis, University of Pennsylvania.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Using semantic roles to improve question answering", |
| "authors": [ |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Shen", |
| "suffix": "" |
| }, |
| { |
| "first": "Mirella", |
| "middle": [], |
| "last": "Lapata", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "EMNLP-CoNLL", |
| "volume": "", |
| "issue": "", |
| "pages": "12--21", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "[Shen and Lapata2007] Dan Shen and Mirella Lapata. 2007. Using semantic roles to improve question an- swering. In EMNLP-CoNLL, pages 12-21. Citeseer.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Token and type constraints for cross-lingual part-of-speech tagging", |
| "authors": [ |
| { |
| "first": "Oscar", |
| "middle": [], |
| "last": "T\u00e4ckstr\u00f6m", |
| "suffix": "" |
| }, |
| { |
| "first": "Dipanjan", |
| "middle": [], |
| "last": "Das", |
| "suffix": "" |
| }, |
| { |
| "first": "Slav", |
| "middle": [], |
| "last": "Petrov", |
| "suffix": "" |
| }, |
| { |
| "first": "Ryan", |
| "middle": [], |
| "last": "McDonald", |
| "suffix": "" |
| }, |
| { |
| "first": "Joakim", |
| "middle": [], |
| "last": "Nivre", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Transactions of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "1--12", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "[T\u00e4ckstr\u00f6m et al.2013] Oscar T\u00e4ckstr\u00f6m, Dipanjan Das, Slav Petrov, Ryan McDonald, and Joakim Nivre. 2013. Token and type constraints for cross-lingual part-of-speech tagging. Transactions of the Associa- tion for Computational Linguistics, 1:1-12.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Crosslingual induction of semantic roles", |
| "authors": [ |
| { |
| "first": "Ivan", |
| "middle": [], |
| "last": "Titov", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexandre", |
| "middle": [], |
| "last": "Klementiev", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "647--656", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "[Titov and Klementiev2012] Ivan Titov and Alexandre Klementiev. 2012. Crosslingual induction of seman- tic roles. In ACL, pages 647-656.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Analysis of the hindi proposition bank using dependency structure", |
| "authors": [ |
| { |
| "first": "Ashwini", |
| "middle": [], |
| "last": "Vaidya", |
| "suffix": "" |
| }, |
| { |
| "first": "Jinho", |
| "middle": ["D"], |
| "last": "Choi", |
| "suffix": "" |
| }, |
| { |
| "first": "Martha", |
| "middle": [], |
| "last": "Palmer", |
| "suffix": "" |
| }, |
| { |
| "first": "Bhuvana", |
| "middle": [], |
| "last": "Narasimhan", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the 5th Linguistic Annotation Workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "21--29", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "[Vaidya et al.2011] Ashwini Vaidya, Jinho D Choi, Martha Palmer, and Bhuvana Narasimhan. 2011. Analysis of the hindi proposition bank using depen- dency structure. In Proceedings of the 5th Linguistic Annotation Workshop, pages 21-29. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Cross-lingual validity of propbank in the manual annotation of french", |
| "authors": [ |
| { |
| "first": "Lonneke", |
| "middle": [], |
| "last": "Van der Plas", |
| "suffix": "" |
| }, |
| { |
| "first": "Tanja", |
| "middle": [], |
| "last": "Samard\u017ei\u0107", |
| "suffix": "" |
| }, |
| { |
| "first": "Paola", |
| "middle": [], |
| "last": "Merlo", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the Fourth Linguistic Annotation Workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "113--117", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "[Van der Plas et al.2010] Lonneke Van der Plas, Tanja Samard\u017ei\u0107, and Paola Merlo. 2010. Cross-lingual va- lidity of propbank in the manual annotation of french. In Proceedings of the Fourth Linguistic Annotation Workshop, pages 113-117. Association for Computa- tional Linguistics.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Global methods for cross-lingual semantic role and predicate labelling", |
| "authors": [ |
| { |
| "first": "Lonneke", |
| "middle": [], |
| "last": "Van der Plas", |
| "suffix": "" |
| }, |
| { |
| "first": "Marianna", |
| "middle": [], |
| "last": "Apidianaki", |
| "suffix": "" |
| }, |
| { |
| "first": "Chenhua", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 25th International Conference on Computational Linguistics (COLING 2014)", |
| "volume": "", |
| "issue": "", |
| "pages": "1279--1290", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "[Van der Plas et al.2011] Lonneke Van der Plas, Paola Merlo, and James Henderson. 2011. Scaling up automatic cross-lingual semantic role annotation. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers-Volume 2, pages 299-304. Association for Computational Linguistics. [Van der Plas et al.2014] Lonneke Van der Plas, Marianna Apidianaki, and Chenhua Chen. 2014. Global methods for cross-lingual semantic role and predicate labelling. In Proceedings of the 25th International Conference on Computational Linguistics (COLING 2014), pages 1279-1290. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "Top languages: The world's 10 most influential languages. Language Today", |
| "authors": [ |
| { |
| "first": "George", |
| "middle": [], |
| "last": "Weber", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "George Weber. 1997. Top languages: The world's 10 most influential languages. Language To- day, December.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "Semantic role labeling of nominalized predicates in chinese", |
| "authors": [ |
| { |
| "first": "Nianwen", |
| "middle": [], |
| "last": "Xue", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of the main conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "431--438", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nianwen Xue. 2006. Semantic role labeling of nominalized predicates in chinese. In Proceedings of the main conference on Human Language Technol- ogy Conference of the North American Chapter of the Association of Computational Linguistics, pages 431- 438. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "Labeling chinese predicates with semantic roles", |
| "authors": [ |
| { |
| "first": "Nianwen", |
| "middle": [], |
| "last": "Xue", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Computational linguistics", |
| "volume": "34", |
| "issue": "2", |
| "pages": "225--255", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nianwen Xue. 2008. Labeling chinese predi- cates with semantic roles. Computational linguistics, 34(2):225-255.", |
| "links": null |
| }, |
| "BIBREF32": { |
| "ref_id": "b32", |
| "title": "Inducing multilingual text analysis tools via robust projection across aligned corpora", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Yarowsky", |
| "suffix": "" |
| }, |
| { |
| "first": "Grace", |
| "middle": [], |
| "last": "Ngai", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Wicentowski", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Proceedings of the first international conference on Human language technology research", |
| "volume": "", |
| "issue": "", |
| "pages": "222--226", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "[Yarowsky et al.2001] David Yarowsky, Grace Ngai, and Richard Wicentowski. 2001. Inducing multilingual text analysis tools via robust projection across aligned corpora. In Proceedings of the first international conference on Human language technology research, pages 1-8. Association for Computational Linguistics. [Zaghouani et al.2010] Wajdi Zaghouani, Mona Diab, Aous Mansouri, Sameer Pradhan, and Martha Palmer. 2010. The revised arabic propbank. In Proceedings of the Fourth Linguistic Annotation Workshop, pages 222-226. Association for Computational Linguistics.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "num": null, |
| "type_str": "figure", |
| "text": "Overview of the proposed two-stage approach for projecting English (EN) semantic role labels onto a TL corpus.", |
| "uris": null |
| }, |
| "FIGREF1": { |
| "num": null, |
| "type_str": "figure", |
| "text": "Values at each bootstrap iteration.", |
| "uris": null |
| }, |
| "TABREF3": { |
| "text": "Breakdown of error classes in argument projection.", |
| "html": null, |
| "type_str": "table", |
| "num": null, |
| "content": "<table><tr><td>Analysis of False Positives While the recall pro-</td></tr><tr><td>duced by direct projection is close to the theoretical</td></tr><tr><td>upper bound, the precision is far from the theoretical</td></tr><tr><td>upper bound of 100%. To understand causes of false</td></tr><tr><td>positives, we examine a random sample of 200 false</td></tr><tr><td>positives, of which 100 are incorrect predicate la-</td></tr><tr><td>bels, and 100 are incorrect argument labels belong-</td></tr><tr><td>ing to correctly projected predicates. Tab. 1 and 2</td></tr><tr><td>show the detailed breakdown of errors for predicates</td></tr><tr><td>and arguments, respectively. We first analyze the</td></tr><tr><td>most common types of errors and discuss the resid-</td></tr><tr><td>ual errors later in Sec. 2.5.</td></tr></table>" |
| }, |
| "TABREF5": { |
| "text": "Quality of predicate and argument labels for different projection methods on Frenchgold, including upper bound.", |
| "html": null, |
| "type_str": "table", |
| "num": null, |
| "content": "<table/>" |
| }, |
| "TABREF7": { |
| "text": "Experiments on Frenchgold, with different projection and SRL training methods. SP=Supplement; OW=Overwrite.", |
| "html": null, |
| "type_str": "table", |
| "num": null, |
| "content": "<table/>" |
| }, |
| "TABREF8": { |
| "text": "Experimental setup .", |
| "html": null, |
| "type_str": "table", |
| "num": null, |
| "content": "<table/>" |
| }, |
| "TABREF12": { |
| "text": "Estimated impact of downsampling parallel corpus.", |
| "html": null, |
| "type_str": "table", |
| "num": null, |
| "content": "<table><tr><td/><td/><td colspan=\"2\">PREDICATE</td><td colspan=\"3\">ARGUMENT</td></tr><tr><td>HEURISTIC</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td></tr><tr><td>none * none * *</td><td colspan=\"5\">0.87 0.81 0.84 0.86 0.74 0.88 0.8 0.84 0.76 0.65</td><td>0.8 0.7</td></tr><tr><td colspan=\"4\">customization * 0.87 0.81 0.84</td><td>0.9</td><td colspan=\"2\">0.74 0.81</td></tr></table>" |
| }, |
| "TABREF13": { |
| "text": "Impact of English SRLs ( * =CLEARNLP, * * =MATE-SRL) and language-spec. customization (filter synt. expletive).", |
| "html": null, |
| "type_str": "table", |
| "num": null, |
| "content": "<table/>" |
| } |
| } |
| } |
| } |