| { |
| "paper_id": "Q14-1005", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T15:11:31.937353Z" |
| }, |
| "title": "Cross-lingual Projected Expectation Regularization for Weakly Supervised Learning", |
| "authors": [ |
| { |
| "first": "Mengqiu", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Stanford University Stanford", |
| "location": { |
| "postCode": "94305", |
| "region": "CA", |
| "country": "USA" |
| } |
| }, |
| "email": "mengqiu@cs.stanford.edu" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Stanford University Stanford", |
| "location": { |
| "postCode": "94305", |
| "region": "CA", |
| "country": "USA" |
| } |
| }, |
| "email": "manning@cs.stanford.edu" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "We consider a multilingual weakly supervised learning scenario where knowledge from annotated corpora in a resource-rich language is transferred via bitext to guide the learning in other languages. Past approaches project labels across bitext and use them as features or gold labels for training. We propose a new method that projects model expectations rather than labels, which facilitates the transfer of model uncertainty across language boundaries. We encode expectations as constraints and train a discriminative CRF model using Generalized Expectation Criteria (Mann and McCallum, 2010). Evaluated on standard Chinese-English and German-English NER datasets, our method demonstrates F1 scores of 64% and 60% when no labeled data is used. Attaining the same accuracy with supervised CRFs requires 12k and 1.5k labeled sentences. Furthermore, when combined with labeled examples, our method yields significant improvements over state-of-the-art supervised methods, achieving the best reported numbers to date on the Chinese OntoNotes and German CoNLL-03 datasets.", |
| "pdf_parse": { |
| "paper_id": "Q14-1005", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "We consider a multilingual weakly supervised learning scenario where knowledge from annotated corpora in a resource-rich language is transferred via bitext to guide the learning in other languages. Past approaches project labels across bitext and use them as features or gold labels for training. We propose a new method that projects model expectations rather than labels, which facilitates the transfer of model uncertainty across language boundaries. We encode expectations as constraints and train a discriminative CRF model using Generalized Expectation Criteria (Mann and McCallum, 2010). Evaluated on standard Chinese-English and German-English NER datasets, our method demonstrates F1 scores of 64% and 60% when no labeled data is used. Attaining the same accuracy with supervised CRFs requires 12k and 1.5k labeled sentences. Furthermore, when combined with labeled examples, our method yields significant improvements over state-of-the-art supervised methods, achieving the best reported numbers to date on the Chinese OntoNotes and German CoNLL-03 datasets.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Supervised statistical learning methods have enjoyed great popularity in Natural Language Processing (NLP) over the past decade. The success of supervised methods depends heavily upon the availability of large amounts of annotated training data. Manual curation of annotated corpora is a costly and time-consuming process. To date, most annotated resources reside in the English language, which hinders the adoption of supervised learning methods in many multilingual environments.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "To minimize the need for annotation, significant progress has been made in developing unsupervised and semi-supervised approaches to NLP (Collins and Singer 1999; Klein 2005; Liang 2005; Smith 2006 ; Goldberg 2010; inter alia) . More recent paradigms for semi-supervised learning allow modelers to directly encode knowledge about the task and the domain as constraints to guide learning (Chang et al., 2007; Mann and McCallum, 2010; Ganchev et al., 2010) . However, in a multilingual setting, coming up with effective constraints requires extensive knowledge of the foreign 1 language.", |
| "cite_spans": [ |
| { |
| "start": 137, |
| "end": 162, |
| "text": "(Collins and Singer 1999;", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 163, |
| "end": 174, |
| "text": "Klein 2005;", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 175, |
| "end": 186, |
| "text": "Liang 2005;", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 187, |
| "end": 197, |
| "text": "Smith 2006", |
| "ref_id": "BIBREF36" |
| }, |
| { |
| "start": 387, |
| "end": 407, |
| "text": "(Chang et al., 2007;", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 408, |
| "end": 432, |
| "text": "Mann and McCallum, 2010;", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 433, |
| "end": 454, |
| "text": "Ganchev et al., 2010)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Bilingual parallel text (bitext) lends itself as a medium to transfer knowledge from a resource-rich language to a foreign language. Yarowsky and Ngai (2001) project labels produced by an English tagger to the foreign side of bitext, then use the projected labels to learn an HMM model. More recent work applied the projection-based approach to more language pairs, and further improved performance through the use of type-level constraints from tag dictionaries and feature-rich generative or discriminative models (Das and Petrov, 2011; T\u00e4ckstr\u00f6m et al., 2013) .", |
| "cite_spans": [ |
| { |
| "start": 134, |
| "end": 158, |
| "text": "Yarowsky and Ngai (2001)", |
| "ref_id": "BIBREF44" |
| }, |
| { |
| "start": 514, |
| "end": 536, |
| "text": "(Das and Petrov, 2011;", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 537, |
| "end": 560, |
| "text": "T\u00e4ckstr\u00f6m et al., 2013)", |
| "ref_id": "BIBREF39" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In our work, we propose a new projection-based method that differs in two important ways. First, we never explicitly project the labels. Instead, we project expectations over the labels. This projection acts as a soft constraint over the labels, which allows us to transfer more information and uncertainty across language boundaries. Second, we encode the expectations as constraints and train a model by minimizing the divergence between model expectations and projected expectations in the Generalized Expectation (GE) Criteria (Mann and McCallum, 2010) framework.", |
| "cite_spans": [ |
| { |
| "start": 527, |
| "end": 552, |
| "text": "(Mann and McCallum, 2010)", |
| "ref_id": "BIBREF28" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We evaluate our approach on Named Entity Recognition (NER) tasks for English-Chinese and English-German language pairs on standard public datasets. We report results in two settings: a weakly supervised setting where no labeled data or only a small amount of labeled data is available, and a semi-supervised setting where labeled data is available, but we can gain additional predictive power by learning from unlabeled bitext.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Most semi-supervised learning approaches embody the principle of learning from constraints. There are two broad categories of constraints: multi-view constraints, and external knowledge constraints.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Examples of methods that explore multi-view constraints include self-training (Yarowsky, 1995; McClosky et al., 2006 ), 2 co-training (Blum and Mitchell, 1998; Sindhwani et al., 2005) , multiview learning (Ando and Zhang, 2005; Carlson et al., 2010) , and discriminative and generative model combination (Suzuki and Isozaki, 2008; Druck and McCallum, 2010) .", |
| "cite_spans": [ |
| { |
| "start": 78, |
| "end": 94, |
| "text": "(Yarowsky, 1995;", |
| "ref_id": "BIBREF45" |
| }, |
| { |
| "start": 95, |
| "end": 116, |
| "text": "McClosky et al., 2006", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 134, |
| "end": 159, |
| "text": "(Blum and Mitchell, 1998;", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 160, |
| "end": 183, |
| "text": "Sindhwani et al., 2005)", |
| "ref_id": "BIBREF35" |
| }, |
| { |
| "start": 205, |
| "end": 227, |
| "text": "(Ando and Zhang, 2005;", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 228, |
| "end": 249, |
| "text": "Carlson et al., 2010)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 304, |
| "end": 330, |
| "text": "(Suzuki and Isozaki, 2008;", |
| "ref_id": "BIBREF38" |
| }, |
| { |
| "start": 331, |
| "end": 356, |
| "text": "Druck and McCallum, 2010)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "An early example of using knowledge as constraints in weakly-supervised learning is the work by Collins and Singer (1999) . They showed that the addition of a small set of \"seed\" rules greatly improves a co-training style unsupervised tagger. Chang et al. (2007) proposed a constraint-driven learning (CODL) framework where constraints are used to guide the selection of the best self-labeled examples to be included as additional training data in an iterative EM-style procedure. The kinds of constraints used in applications such as NER are ones like \"the words CA, Australia, NY are LOCATION\" (Chang et al., 2007) . Notice the similarity of this particular constraint to the kinds of features one would expect to see in a discriminative MaxEnt model. The difference is that instead of learning the validity (or weight) of this feature from labeled examples -since we do not have them -we can constrain the model using our knowledge of the domain. It has also been demonstrated that in an active learning setting where the annotation budget is limited, it is more efficient to label features than examples. Other sources of knowledge include lexicons and gazetteers (Druck et al., 2007; Chang et al., 2007) .", |
| "cite_spans": [ |
| { |
| "start": 96, |
| "end": 121, |
| "text": "Collins and Singer (1999)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 242, |
| "end": 261, |
| "text": "Chang et al. (2007)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 594, |
| "end": 614, |
| "text": "(Chang et al., 2007)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 1151, |
| "end": 1171, |
| "text": "(Druck et al., 2007;", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 1172, |
| "end": 1191, |
| "text": "Chang et al., 2007)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "While it is straightforward to see how resources such as a list of city names can give a lot of mileage in recognizing locations, we are also exposed to the danger of over-committing to hard constraints. For example, it becomes problematic with city names that are ambiguous, such as Augusta, Georgia. 3 To soften these constraints, Mann and McCallum (2010) proposed the Generalized Expectation (GE) Criteria framework, which encodes constraints as a regularization term over some score function that measures the divergence between the model's expectation and the target expectation. The connection between GE and CODL is analogous to the relationship between hard (Viterbi) EM and soft EM, as illustrated by Samdani et al. (2012) .", |
| "cite_spans": [ |
| { |
| "start": 303, |
| "end": 304, |
| "text": "3", |
| "ref_id": null |
| }, |
| { |
| "start": 334, |
| "end": 358, |
| "text": "Mann and McCallum (2010)", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 711, |
| "end": 732, |
| "text": "Samdani et al. (2012)", |
| "ref_id": "BIBREF33" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Another closely related work is the Posterior Regularization (PR) framework by Ganchev et al. (2010) . In fact, as Bellare et al. (2009) have shown, in a discriminative model these two methods optimize exactly the same objective. 4 The two differ in optimization details: PR uses an EM algorithm to approximate the gradients, which avoids the expensive computation of a covariance matrix between features and constraints, whereas GE directly calculates the gradient. However, later results (Druck, 2011) have shown that using the Expectation Semiring techniques of Li and Eisner (2009) , one can compute the exact gradients of GE in a Conditional Random Field (CRF) (Lafferty et al., 2001 ) at costs no greater than computing the gradients of an ordinary CRF. Empirically, GE tends to perform more accurately than PR (Bellare et al., 2009; Druck, 2011) .", |
| "cite_spans": [ |
| { |
| "start": 79, |
| "end": 100, |
| "text": "Ganchev et al. (2010)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 115, |
| "end": 136, |
| "text": "Bellare et al. (2009)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 230, |
| "end": 231, |
| "text": "4", |
| "ref_id": null |
| }, |
| { |
| "start": 488, |
| "end": 501, |
| "text": "(Druck, 2011)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 563, |
| "end": 583, |
| "text": "Li and Eisner (2009)", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 665, |
| "end": 687, |
| "text": "(Lafferty et al., 2001", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 817, |
| "end": 839, |
| "text": "(Bellare et al., 2009;", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 840, |
| "end": 852, |
| "text": "Druck, 2011)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Obtaining appropriate knowledge resources for constructing constraints remains a bottleneck in applying GE and PR to new languages. However, a number of past works recognize parallel bitext as a rich source of linguistic constraints, naturally captured in the translations. As a result, bitext has been effectively utilized for unsupervised multilingual grammar induction (Alshawi et al., 2000; , parsing (Burkett and Klein, 2008) , and sequence labeling .", |
| "cite_spans": [ |
| { |
| "start": 374, |
| "end": 396, |
| "text": "(Alshawi et al., 2000;", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 407, |
| "end": 432, |
| "text": "(Burkett and Klein, 2008)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "A number of recent works have also explored bilingual constraints in the context of simultaneous bilingual tagging, and showed that enforcing agreement between language pairs gives superior results to monolingual tagging (Burkett et al., 2010; Che et al., 2013; Wang et al., 2013a) . Burkett et al. (2010) also demonstrated an uptraining setting where tag-induced bitext can be used as additional monolingual training data to improve monolingual taggers. A major drawback of this approach is that it requires readily trained tagging models in each language, which makes a weakly supervised setting infeasible. Another intricacy of this approach is that it only works when the two models have comparable strength, since mutual agreement is enforced between them.", |
| "cite_spans": [ |
| { |
| "start": 217, |
| "end": 239, |
| "text": "(Burkett et al., 2010;", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 240, |
| "end": 257, |
| "text": "Che et al., 2013;", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 258, |
| "end": 277, |
| "text": "Wang et al., 2013a)", |
| "ref_id": "BIBREF41" |
| }, |
| { |
| "start": 280, |
| "end": 301, |
| "text": "Burkett et al. (2010)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Projection-based methods can be very effective in weakly-supervised scenarios, as demonstrated by Yarowsky and Ngai (2001) , and Xi and Hwa (2005) . One problem with projected labels is that they are often too noisy to be directly used as training signals. To mitigate this problem, Das and Petrov (2011) designed a label propagation method to automatically induce a tag lexicon for the foreign language to smooth the projected labels. Fossum and Abney (2005) filter out projection noise by combining projections from multiple source languages. However, this approach is not always viable since it relies on having parallel bitext from multiple source languages. Li et al. (2012) proposed the use of the crowd-sourced Wiktionary as an additional resource for inducing tag lexicons. More recently, T\u00e4ckstr\u00f6m et al. (2013) combined token-level and type-level constraints to constrain legitimate label sequences and recalibrate the probability distribution in a CRF. The tag dictionaries used for POS tagging are analogous to the gazetteers and name lexicons used for NER by Chang et al. (2007) .", |
| "cite_spans": [ |
| { |
| "start": 98, |
| "end": 122, |
| "text": "Yarowsky and Ngai (2001)", |
| "ref_id": "BIBREF44" |
| }, |
| { |
| "start": 129, |
| "end": 146, |
| "text": "Xi and Hwa (2005)", |
| "ref_id": "BIBREF43" |
| }, |
| { |
| "start": 283, |
| "end": 304, |
| "text": "Das and Petrov (2011)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 436, |
| "end": 459, |
| "text": "Fossum and Abney (2005)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 668, |
| "end": 684, |
| "text": "Li et al. (2012)", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 796, |
| "end": 819, |
| "text": "T\u00e4ckstr\u00f6m et al. (2013)", |
| "ref_id": "BIBREF39" |
| }, |
| { |
| "start": 1074, |
| "end": 1093, |
| "text": "Chang et al. (2007)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Our work is also closely related to Ganchev et al. (2009) . They used a two-step projection method similar to Das and Petrov (2011) for dependency parsing. Instead of using the projected linguistic structures as ground truth (Yarowsky and Ngai, 2001) , or as features in a generative model (Das and Petrov, 2011) , they used them as constraints in a PR framework. Our work differs by projecting expectations rather than Viterbi one-best labels. We also choose the GE framework over PR. Experiments in Bellare et al. (2009) and Druck (2011) suggest that in a discriminative model (like ours), GE is more accurate than PR. More recently, Ganchev and Das (2013) further extended this line of work to directly train discriminative sequence models using cross-lingual projection with PR. The types of constraints applied in this new work are similar to the ones in the monolingual PR setting proposed by Ganchev et al. (2010) , where the total counts of labels of a particular kind are expected to match some fraction of the projected total counts. Our work differs in that we enforce expectation constraints at the token level, which gives tighter guidance in learning the model.", |
| "cite_spans": [ |
| { |
| "start": 36, |
| "end": 57, |
| "text": "Ganchev et al. (2009)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 110, |
| "end": 131, |
| "text": "Das and Petrov (2011)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 225, |
| "end": 250, |
| "text": "(Yarowsky and Ngai, 2001)", |
| "ref_id": "BIBREF44" |
| }, |
| { |
| "start": 290, |
| "end": 312, |
| "text": "(Das and Petrov, 2011)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 501, |
| "end": 522, |
| "text": "Bellare et al. (2009)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 527, |
| "end": 539, |
| "text": "Druck (2011)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 636, |
| "end": 658, |
| "text": "Ganchev and Das (2013)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 899, |
| "end": 920, |
| "text": "Ganchev et al. (2010)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Given bitext between English and a foreign language, our goal is to learn a CRF model in the foreign language from little or no labeled data. Our method performs Cross-Lingual Projected Expectation Regularization (CLiPER).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Approach", |
| "sec_num": "3" |
| }, |
| { |
| "text": "For every aligned sentence pair in the bitext, we first compute the posterior marginal at each word position on the English side using a pre-trained English CRF tagger; then for each aligned English word, we project its posterior marginal as expectations to the aligned word position on the foreign side. Figure 1 shows a snippet of a sentence from a real corpus. Notice that if we were to directly project the Viterbi best assignment from English to Chinese, all three Chinese words that are named entities would have gotten the wrong tags. But projecting the English CRF model expectations preserves some uncertainty, informing the Chinese model that there is a 40% chance that \"\u4e2d\u56fd\u65e5\u62a5\" (China Daily) is an organization in this context. We would like to learn a CRF model in the foreign language that has expectations similar to the projected expectations from English. To this end, we adopt the Generalized Expectation (GE) Criteria framework introduced by Mann and McCallum (2010) . In the remainder of this section, we follow the notation used in (Druck, 2011) to explain our approach.", |
| "cite_spans": [ |
| { |
| "start": 958, |
| "end": 982, |
| "text": "Mann and McCallum (2010)", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 1050, |
| "end": 1063, |
| "text": "(Druck, 2011)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 305, |
| "end": 313, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Approach", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The general idea of GE is that we can express our preferences over models through constraint functions. A desired model should satisfy the imposed constraints by matching the expectations on these constraint functions with some target expectations (attained from external knowledge like lexicons, or in our case, knowledge transferred from English). We define a constraint function \u03c6 i,l j for each word position i and output label assignment l j . \u03c6 i,l j = 0 encodes the constraint that position i cannot take label l j .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "CLiPER", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "The set {l 1 , \u2022 \u2022 \u2022 , l m } denotes all possible label assignments for each y i , and m is the number of label values. A i is the set of English words aligned to Chinese word i. \u03c6 i,l j is defined for all positions i such that A i = \u2205. In other words, the constraint function applies only to Chinese word positions that have at least one aligned English word. Each \u03c6 i,l j (y) can be treated as a Bernoulli random variable, and we concatenate the set of all \u03c6 i,l j into a random vector", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "CLiPER", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "\u03c6(y), where \u03c6 k = \u03c6 i,l j if k = i * m + j.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "CLiPER", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "We drop the (y) in \u03c6 for simplicity.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "CLiPER", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "The target expectation over \u03c6 i,l j , denoted as \u03c6 i,l j , is the expectation of assigning label l j to the English words in A i under the English conditional probability model. When multiple English words are aligned to the same foreign word, we average the expectations.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "CLiPER", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "The expectation over \u03c6 under a conditional probability model P (y|x; \u03b8) is denoted as E P (y|x;\u03b8) [\u03c6], and simplified as E \u03b8 [\u03c6] whenever it is unambiguous.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "CLiPER", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "The conditional probability model P (y|x; \u03b8) in our case is defined as a standard linear-chain CRF: 5", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "CLiPER", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "P (y|x; \u03b8) = (1/Z(x; \u03b8)) exp( \u2211_{i=1}^{n} \u03b8 T f (x, y i , y i\u22121 ) )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "CLiPER", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "where f is a set of feature functions; \u03b8 are the matching parameters to learn; n = |x|.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "CLiPER", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "The objective function to maximize in a standard CRF is the log probability over a collection of labeled documents:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "CLiPER", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "L CRF (\u03b8) = \u2211_a log P (y* a |x a ; \u03b8)", |
| "eq_num": "(1)" |
| } |
| ], |
| "section": "CLiPER", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Here a indexes the labeled sentences, and y* a is the corresponding observed label sequence. The objective function to maximize in GE is defined as the sum, over all unlabeled examples on the foreign side of the bitext (denoted as x b ), of some cost function S between the model expectation over \u03c6 (E \u03b8 [\u03c6]) and the target expectation (\u03c6).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "CLiPER", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "We choose S to be the negative squared L 2 error sum 6 defined as:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "CLiPER", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "L GE (\u03b8) = \u2211_{b=1}^{n} S( E P (y b |x b ;\u03b8) [\u03c6(y b )], \u03c6 b ) = \u2211_{b=1}^{n} \u2212|| \u03c6 b \u2212 E \u03b8 [\u03c6(y b )] ||\u00b2 (2)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "CLiPER", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "n is the total number of unlabeled bitext sentence pairs.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "CLiPER", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "When both labeled and bitext training data are available, the joint objective is the sum of Eqn. 1 and 2. Each is computed over the labeled training data and foreign half in the bitext, respectively.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "CLiPER", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "We can optimize this joint objective by computing the gradients and using a gradient-based optimization method such as L-BFGS. The gradient of L CRF decomposes into the gradients over each labeled training example (x, y * ). The gradient of L GE decomposes into the gradients of S( E P (y|x b ;\u03b8) [\u03c6] ) for each unlabeled foreign sentence x b and the constraints \u03c6 over this example. The gradients can be calculated as:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "CLiPER", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "\u2202/\u2202\u03b8 S(E \u03b8 [\u03c6]) = \u2212 \u2202/\u2202\u03b8 ( \u03c6 \u2212 E \u03b8 [\u03c6] ) T ( \u03c6 \u2212 E \u03b8 [\u03c6] ) = 2 ( \u03c6 \u2212 E \u03b8 [\u03c6] ) T \u2202/\u2202\u03b8 E \u03b8 [\u03c6]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "CLiPER", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "We redefine the penalty vector", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "CLiPER", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "2( \u03c6 \u2212 E \u03b8 [\u03c6] ) to be u. \u2202/\u2202\u03b8 E \u03b8 [\u03c6]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "CLiPER", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "is a matrix where each column contains the gradients for a particular model feature \u03b8 with respect to all constraint functions \u03c6. It can be computed as:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "CLiPER", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "\u2202/\u2202\u03b8 E \u03b8 [\u03c6] = \u2211_y \u03c6(y) \u2202/\u2202\u03b8 P (y|x; \u03b8) = \u2211_y \u03c6(y) \u2202/\u2202\u03b8 [ (1/Z(x; \u03b8)) exp(\u03b8 T f (x, y)) ] = \u2211_y \u03c6(y) [ (1/Z(x; \u03b8)) \u2202/\u2202\u03b8 exp(\u03b8 T f (x, y)) + exp(\u03b8 T f (x, y)) \u2202/\u2202\u03b8 (1/Z(x; \u03b8)) ] = \u2211_y \u03c6(y) [ P (y|x; \u03b8) f (x, y) T \u2212 P (y|x; \u03b8) \u2211_{y'} P (y'|x; \u03b8) f (x, y') T ] = \u2211_y P (y|x; \u03b8) \u03c6(y) f (x, y) T \u2212 ( \u2211_y P (y|x; \u03b8) \u03c6(y) ) ( \u2211_y P (y|x; \u03b8) f (x, y) T ) = COV P (y|x;\u03b8) (\u03c6(y), f (x, y)) (3) = E \u03b8 [\u03c6 f T ] \u2212 E \u03b8 [\u03c6] E \u03b8 [f T ]", |
| "eq_num": "(4)" |
| } |
| ], |
| "section": "CLiPER", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Eqn. 3 gives the intuition of how optimization works in GE. In each iteration of L-BFGS, the model parameters are updated according to their covariance with the constraint features, scaled by the difference between the current expectation and the target expectation. The term E \u03b8 [\u03c6f T ] in Eqn. 4 can be computed using a dynamic programming (DP) algorithm, but solving it directly requires us to store a matrix of the same dimension as f T in each step of the DP. We can reduce the complexity by using the same trick as in (Li and Eisner, 2009) for computing the Expectation Semiring. The resulting algorithm has complexity O(nm\u00b2), which is the same as the standard forward-backward inference algorithm for CRFs. (Druck, 2011, 93) gives full details of this derivation.", |
| "cite_spans": [ |
| { |
| "start": 516, |
| "end": 537, |
| "text": "(Li and Eisner, 2009)", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 703, |
| "end": 720, |
| "text": "(Druck, 2011, 93)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "CLiPER", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Projecting expectations instead of one-best label assignments from English to the foreign language can be thought of as a soft version of the methods described in (Das and Petrov, 2011) and (Ganchev et al., 2009) . Soft projection has its advantages: when the English model is not certain about its predictions, we do not have to commit to the current best prediction. The foreign model has more freedom to form its own beliefs, since any marginal distribution it produces would deviate from a flat distribution by just about the same amount. In general, preserving uncertainty until later is a strategy that has benefited many NLP tasks (Finkel et al., 2006) . Hard projection can also be treated as a special case in our framework. We can simply recalibrate the posterior marginal of English by assigning probability mass 1 to the most likely outcome and zeroing everything else out, effectively taking the argmax of the marginal at each word position. We refer to this version of expectation as the \"hard\" expectation. In the hard projection setting, GE training resembles a \"project-then-train\" style semi-supervised CRF training scheme (Yarowsky and Ngai, 2001; T\u00e4ckstr\u00f6m et al., 2013) . In such a training scheme, we project the one-best predictions of the English CRF to the foreign side through word alignments, then include the newly \"tagged\" foreign data as additional training data for a standard CRF in the foreign language. Rather than projecting labels on a per-word basis, Yarowsky and Ngai (2001) also explored an alternative method for the noun-phrase (NP) bracketing task that amounts to projecting the spans of NPs, based on the observation that individual NPs tend to retain their sequential spans across translations. We experimented with the same method for NER, but found that this method of projecting the NE spans does not help in reducing noise and actually lowers model performance.", |
| "cite_spans": [ |
| { |
| "start": 158, |
| "end": 180, |
| "text": "(Das and Petrov, 2011)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 185, |
| "end": 207, |
| "text": "(Ganchev et al., 2009)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 632, |
| "end": 653, |
| "text": "(Finkel et al., 2006)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 1129, |
| "end": 1154, |
| "text": "(Yarowsky and Ngai, 2001;", |
| "ref_id": "BIBREF44" |
| }, |
| { |
| "start": 1155, |
| "end": 1178, |
| "text": "T\u00e4ckstr\u00f6m et al., 2013)", |
| "ref_id": "BIBREF39" |
| }, |
| { |
| "start": 1471, |
| "end": 1495, |
| "text": "Yarowsky and Ngai (2001)", |
| "ref_id": "BIBREF44" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hard vs. soft Projection", |
| "sec_num": "3.2" |
| }, |
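The hard-projection special case described above can be sketched in a few lines; this is an illustrative simplification (plain Python lists standing in for per-token CRF marginals, hypothetical label indices), not the paper's implementation:

```python
def hard_expectation(marginals):
    """Collapse per-token posterior marginals into "hard" expectations:
    probability mass 1 on the most likely label and 0 elsewhere, i.e.
    the argmax of the marginal at each word position."""
    hard = []
    for dist in marginals:
        best = max(range(len(dist)), key=lambda i: dist[i])
        hard.append([1.0 if i == best else 0.0 for i in range(len(dist))])
    return hard

soft = [[0.6, 0.3, 0.1],   # token 1: fairly confident in label 0
        [0.2, 0.5, 0.3]]   # token 2: uncertain, but label 1 is the argmax
print(hard_expectation(soft))  # -> [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
```

Note how the second token's uncertainty (0.5 vs. 0.3) is discarded entirely, which is exactly what soft projection avoids.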
| { |
| "text": "Besides projecting expectations rather than hard labels, our method and the \"project-then-train\" scheme also differ in the objectives they optimize: the CRF maximizes the conditional likelihood of the observed label sequence, whereas GE minimizes the squared error between the model's expectation and the \"hard\" expectation derived from the observed label sequence. When the squared-error loss is replaced with a KL-divergence loss, GE has the same effect as marginalizing out all positions with unknown projected labels, allowing more robust learning of uncertainties in the model. As we will show in the experimental results in Section 4.2, soft projection in combination with the GE objective significantly outperforms the project-then-train style CRF training scheme.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hard vs. soft Projection", |
| "sec_num": "3.2" |
| }, |
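The squared-error comparison between model expectations and projected expectations can be sketched as follows; this is a minimal illustration of the loss shape only (plain lists in place of CRF feature expectations), not the paper's full GE objective (Eqn. 2):

```python
def ge_squared_error(model_expectation, projected_expectation):
    """Sum of squared differences between the foreign model's per-token
    marginals and the expectations projected across from English."""
    return sum((m - p) ** 2
               for m_row, p_row in zip(model_expectation, projected_expectation)
               for m, p in zip(m_row, p_row))

model     = [[0.7, 0.3], [0.4, 0.6]]   # foreign model's current marginals
projected = [[1.0, 0.0], [0.5, 0.5]]   # expectations projected from English
print(ge_squared_error(model, projected))
```

A confident English prediction (the first token) pulls the foreign marginal hard toward one label, while an uncertain one (the second token) contributes only a small penalty.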
| { |
| "text": "An additional source of noise comes from errors made by the source-side English CRF models. The English CRF model gives an F1 score of 81.68% on the OntoNotes dataset in the English-Chinese experiment, and 90.45% on the CoNLL-03 dataset in the English-German experiment. We present a simple way of modeling English-side noise by picturing the following process: the labels assigned by the English CRF model (denoted y) are a noised version of the true labels (denoted y*). We can recover the probability of the true labels by marginalizing over the observed labels: P(y*|x) = \u2211_y P(y*|y) P(y|x). Here P(y|x) is the posterior probability given by the CRF model, and we approximate P(y*|y) by the column-normalized error confusion matrix shown in Table 1. This source-side noise model is likely overly simplistic; one could build much more sophisticated noise models for the source side, possibly conditioning on context or capturing higher-order label sequences.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 778, |
| "end": 785, |
| "text": "Table 1", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Source-side noise", |
| "sec_num": "3.3" |
| }, |
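The marginalization P(y*|x) = \u2211_y P(y*|y) P(y|x) above is a simple matrix-vector product per token. The sketch below assumes a toy two-label setting and a hand-made confusion matrix; the paper's actual matrix (Table 1) is estimated from CRF errors:

```python
def denoise_posteriors(posteriors, confusion):
    """Recover P(y*|x) = sum_y P(y*|y) * P(y|x).
    posteriors: per-token CRF marginals P(y|x), one list per token
    confusion:  confusion[y_star][y] approximates P(y*|y)
                (columns sum to 1, as in a column-normalized error matrix)"""
    n = len(confusion)
    return [[sum(confusion[y_star][y] * p[y] for y in range(n))
             for y_star in range(n)]
            for p in posteriors]

posteriors = [[0.2, 0.8], [0.5, 0.5]]       # CRF marginals P(y|x)
confusion  = [[0.9, 0.2], [0.1, 0.8]]       # toy column-normalized P(y*|y)
print(denoise_posteriors(posteriors, confusion))
```

Because each column of the confusion matrix sums to 1, every output row remains a proper distribution over labels.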
| { |
| "text": "We conduct experiments on Chinese and German NER. We evaluate CLiPER in two learning settings: weakly supervised and semi-supervised. In the weakly supervised setting, we simulate the condition of having no labeled training data, and evaluate the model learned from bitext alone. We then vary the amount of labeled data available to the model, and examine the model's learning curve. In the semi-supervised setting, we assume our model has access to the full labeled data; our goal is to improve performance of the supervised method by learning from additional bitext.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "4" |
| }, |
| { |
| "text": "We used the latest version of the Stanford NER Toolkit 7 as our base CRF model in all experiments. Features for the English, Chinese and German CRFs are documented extensively in (Che et al., 2013) and (Faruqui and Pad\u00f3, 2010) and are omitted here for brevity. It is worth noting that the current Stanford NER models include recent improvements from semi-supervised learning approaches that induce distributional-similarity features from large word clusters. These models represent the current state-of-the-art in supervised methods and serve as a very strong baseline. For the Chinese NER experiments, we follow the same setup as Che et al. (2013) and evaluate on the latest OntoNotes (v4.0) corpus (Hovy et al., 2006). 8 A total of 8,249 sentences from the parallel Chinese and English Penn Treebank portion 9 are reserved for evaluation. Odd-numbered documents are used as the development set, and even-numbered documents are held out as the blind test set. The rest of OntoNotes annotated with NER tags is used to train the English and Chinese CRF base taggers; there are about 16k and 39k labeled sentences for Chinese and English training, respectively. The English CRF tagger trained on this corpus gives an F1 score of 81.68% on the OntoNotes test set. Four entity types 10 are used for both Chinese and English, with an IO tagging scheme. 11 The English-Chinese bitext comes from the Foreign Broadcast Information Service corpus (FBIS). 12 We randomly sampled 80k parallel sentence pairs to use as bitext in our experiments. The bitext is first sentence-aligned using the Champollion Tool Kit, 13 then word-aligned with the BerkeleyAligner. 14 For the German NER experiments, we evaluate using the standard CoNLL-03 NER corpus (Sang and Meulder, 2003). The labeled training set has 12k and 15k sentences, containing four entity types. 15 An English CRF model is also trained on the CoNLL-03 English data with the same entity types. For bitext, we used a randomly sampled set of 40k parallel sentences from the de-en portion of the News Commentary dataset. 16 The English CRF tagger trained on the CoNLL-03 English training corpus gives an F1 score of 90.4% on the CoNLL-03 test set. (Footnotes: 7 http://www-nlp.stanford.edu/ner; 8 LDC catalogue No.: LDC2011T03; 9 file numbers: chtb 0001-0325, ectb 1001-1078; 10 PERSON, LOCATION, ORGANIZATION and GPE; 11 we did not adopt the commonly seen BIO tagging scheme.)", |
| "cite_spans": [ |
| { |
| "start": 171, |
| "end": 189, |
| "text": "(Che et al., 2013)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 194, |
| "end": 218, |
| "text": "(Faruqui and Pad\u00f3, 2010)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 615, |
| "end": 632, |
| "text": "Che et al. (2013)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 683, |
| "end": 702, |
| "text": "(Hovy et al., 2006)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 1511, |
| "end": 1513, |
| "text": "11", |
| "ref_id": null |
| }, |
| { |
| "start": 1643, |
| "end": 1645, |
| "text": "12", |
| "ref_id": null |
| }, |
| { |
| "start": 1921, |
| "end": 1945, |
| "text": "(Sang and Meulder, 2003)", |
| "ref_id": "BIBREF34" |
| }, |
| { |
| "start": 2030, |
| "end": 2032, |
| "text": "15", |
| "ref_id": null |
| }, |
| { |
| "start": 2251, |
| "end": 2253, |
| "text": "16", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dataset and setup", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "We report typed entity precision (P), recall (R) and F1 score. Statistical significance tests are done using a paired bootstrap resampling method with 1000 iterations, averaged over 5 runs. We compare against three recent approaches that were introduced in Section 2: a semi-supervised learning method using factored bilingual models with Gibbs sampling (Wang et al., 2013a); bilingual NER using Integer Linear Programming (ILP) with bilingual constraints (Che et al., 2013); and a constraint-driven bilingual re-ranking approach (Burkett et al., 2010). The code from (Che et al., 2013) and (Wang et al., 2013a) is publicly available. 17 Code from (Burkett et al., 2010) was obtained through personal communication.", |
| "cite_spans": [ |
| { |
| "start": 365, |
| "end": 385, |
| "text": "(Wang et al., 2013a)", |
| "ref_id": "BIBREF41" |
| }, |
| { |
| "start": 472, |
| "end": 490, |
| "text": "(Che et al., 2013)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 544, |
| "end": 566, |
| "text": "(Burkett et al., 2010)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 583, |
| "end": 601, |
| "text": "(Che et al., 2013)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 606, |
| "end": 626, |
| "text": "(Wang et al., 2013a)", |
| "ref_id": "BIBREF41" |
| }, |
| { |
| "start": 651, |
| "end": 653, |
| "text": "17", |
| "ref_id": null |
| }, |
| { |
| "start": 664, |
| "end": 686, |
| "text": "(Burkett et al., 2010)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dataset and setup", |
| "sec_num": "4.1" |
| }, |
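The paired bootstrap resampling test mentioned above can be sketched as follows; `metric_a`/`metric_b` are hypothetical callables mapping a resampled list of sentence indices to a score (e.g. F1 over those sentences), and this is a simplified illustration of the procedure, not the paper's evaluation code:

```python
import random

def paired_bootstrap(metric_a, metric_b, n_sentences, iters=1000, seed=0):
    """Paired bootstrap resampling: resample the test set with replacement
    `iters` times and report the fraction of resamples in which system A
    scores strictly higher than system B."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(iters):
        # one bootstrap resample: n_sentences indices drawn with replacement
        sample = [rng.randrange(n_sentences) for _ in range(n_sentences)]
        if metric_a(sample) > metric_b(sample):
            wins += 1
    return wins / iters
```

A win fraction of, say, 0.995 corresponds to significance at the 99.5% confidence level reported in the tables.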
| { |
| "text": "Since the objective function in Eqn. 2 is non-convex, we adopted the early-stopping training scheme from (Turian et al., 2010), as follows: after each iteration of L-BFGS training, the model is evaluated against the development set, and training is terminated if no improvement has been made in 20 iterations. We did not adopt the commonly seen BIO tagging scheme (Ramshaw and Marcus, 1999), because when projected across swapping word alignments, the \"B-\" and \"I-\" tag distinction may not be well preserved and may introduce additional noise.", |
| "cite_spans": [ |
| { |
| "start": 104, |
| "end": 125, |
| "text": "(Turian et al., 2010)", |
| "ref_id": "BIBREF40" |
| }, |
| { |
| "start": 195, |
| "end": 221, |
| "text": "(Ramshaw and Marcus, 1999)", |
| "ref_id": "BIBREF32" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dataset and setup", |
| "sec_num": "4.1" |
| }, |
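The early-stopping scheme can be sketched as a thin wrapper around the optimizer loop; `step` and `evaluate` are hypothetical callables standing in for one L-BFGS iteration and a dev-set F1 evaluation:

```python
def train_with_early_stopping(step, evaluate, max_iters=1000, patience=20):
    """Run `step()` once per iteration; after each, score the model on the
    dev set with `evaluate()`. Stop when no improvement has been seen for
    `patience` consecutive iterations, and return the best dev score."""
    best_f1, best_iter = float("-inf"), 0
    for it in range(max_iters):
        step()
        f1 = evaluate()
        if f1 > best_f1:
            best_f1, best_iter = f1, it
        elif it - best_iter >= patience:
            break  # no improvement within the patience window
    return best_f1
```

In practice one would also snapshot the parameters at the best iteration and restore them after stopping.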
| { |
| "text": "The FBIS corpus is a collection of radio newscasts and contains translations of openly available news and information from media sources outside the United States.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dataset and setup", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Figures 2a and 2b show the results of the weakly supervised learning experiments. Quite remarkably, on the Chinese test set our proposed method (CLiPER) achieves an F1 score of 64.4% with 80k bitext when no labeled training data is used. In contrast, the supervised CRF baseline requires as much as 12k labeled sentences to attain the same accuracy. Results on the German test set are less striking: with no labeled data and 40k of bitext, CLiPER performs at an F1 of 60.0%, the equivalent of using 1.5k labeled examples in the supervised setting. When combined with 1k labeled examples, the performance of CLiPER reaches 69%, a gain of over 5% absolute over the supervised CRF. We also notice that the supervised CRF model learns much faster in German than in Chinese. This result is not too surprising, since it is well recognized that Chinese NER is more challenging than German or English NER: the best supervised results for Chinese are 10-20% (in F1) behind the best German and English supervised results. Chinese NER relies more on lexicalized features, and therefore needs more labeled data to achieve good coverage. The results suggest that CLiPER is very effective at transferring lexical knowledge from English to Chinese. Figures 2c and 2d compare soft GE projection with hard GE projection and the \"project-then-train\" style CRF training scheme (cf. Section 3.2). We observe that both soft and hard GE projection significantly outperform the \"project-then-train\" style training scheme. The difference is especially pronounced in the Chinese results when fewer labeled examples are available. Soft projection gives better accuracy than hard projection when no labeled data is available, and also has a faster learning rate.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 1214, |
| "end": 1223, |
| "text": "Figure 2c", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Weakly supervised results", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Incorporating source-side noise using the method described in Section 3.3 gives a small improvement on Chinese, increasing the F1 score from 64.40% to 65.50%; this improvement is statistically significant at the 92% confidence level. On the German data, however, we observe a tiny, statistically insignificant decrease in F1, from 59.88% to 59.66%. A likely explanation of the difference is that the English CRF model in the English-Chinese experiment, which is trained on OntoNotes data, has a much higher error rate (18.32%) than the English CRF model in the English-German experiment, which is trained on CoNLL-03 (9.55%). Therefore, modeling noise is likely to have a greater effect in the English-Chinese case than in the English-German case.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Weakly supervised results", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "In the semi-supervised experiments, we let the CRF model use the full set of labeled examples in addition to the unlabeled bitext. Results on the test set are shown in Table 2. All semi-supervised baselines are tested with the same amount of unlabeled bitext as CLiPER in each language. The \"project-then-train\" semi-supervised training scheme severely hurts performance on Chinese, but gives a small improvement on German. Moreover, on Chinese it achieves high precision at a significant loss in recall, while on German its behavior is the opposite. Such drastic and erratic imbalance suggests that this method is neither robust nor reliable. The other three semi-supervised baselines (rows 3-5) all show improvements over the CRF baseline, consistent with their reported results. CLIPER s gives the best results on both Chinese and German, yielding statistically significant improvements over all baselines except CWD13 on German. The hard projection version of CLiPER also gives a sizable gain over the CRF; in comparison, however, CLIPER s is superior.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 168, |
| "end": 175, |
| "text": "Table 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Semi-supervised results", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "The improvement of CLIPER s over the CRF on the Chinese test set is over 2.8% in absolute F1; the improvement over the CRF on German is almost one percent. To our knowledge, these are the best reported numbers on the OntoNotes Chinese and CoNLL-03 German datasets.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Semi-supervised results", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Another advantage of our proposed approach is efficiency. Because we eliminate the previous multi-stage \"uptraining\" paradigm and instead integrate the semi-supervised and supervised objectives into one joint objective, we attain significant speed improvements over all methods except CRF ptt. Table 3 shows the required training times: CRF 19m30s (Chinese) / 7m15s (German); CRF ptt 34m2s / 12m45s; WCD13 3h17m / 1h1m; CWD13a 16h42m / 4h49m; CWD13b 16h42m / 4h49m; BPBK10 6h16m / 2h42m; CLiPER h 1h28m / 16m30s; CLiPER s 1h40m / 18m51s. Table 2 shows the test-set Chinese and German NER results, with the best number in each column highlighted in bold. CRF is the supervised baseline. CRF ptt is the \"project-then-train\" semi-supervised scheme for CRF. BPBK10 is (Burkett et al., 2010), WCD13 is (Wang et al., 2013a), CWD13A is (Che et al., 2013), and WCD13B is (Wang et al., 2013b). CLIPER s and CLIPER h are the soft and hard projections. \u00a7 indicates F1 scores that are statistically significantly better than the CRF baseline at the 99.5% confidence level; marks significance over CRF ptt with 99.5% confidence; \u2020 and \u2021 mark significance over WCD13 with 99.9% and 94% confidence; marks significance over CWD13 with 99.7% confidence; and * marks significance over BPBK10 with 99.9% confidence. Figures 2e and 2f give two examples of cross-lingual projection methods in action. Both examples contain a named entity that immediately precedes the word \"\u7eaa\u5ff5\u7891\" (monument) in the Chinese sentence. In Figure 2e, the word \"\u9ad8\u5c97\" has the literal meaning of a hillock located at a high position, but also happens to be the name of a former vice president of China. Without having previously observed this word as a person name in the labeled training data, the CRF model does not have enough evidence to believe that it is a PERSON rather than a LOCATION. But the aligned words in English (\"Gao Gang\") are clearly part of a person name, as they are preceded by a title (\"Vice President\"). The English model has high expectation that the Chinese word aligned to \"Gao Gang\" is also a PERSON. Therefore, projecting the English expectations to Chinese provides a strong clue to help disambiguate this word. Figure 2f gives another example: the word \"\u9ec4\u6cb3\" (Huang He, the Yellow River of China) can", |
| "cite_spans": [ |
| { |
| "start": 560, |
| "end": 582, |
| "text": "(Burkett et al., 2010)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 594, |
| "end": 614, |
| "text": "(Wang et al., 2013a)", |
| "ref_id": "BIBREF41" |
| }, |
| { |
| "start": 627, |
| "end": 645, |
| "text": "(Che et al., 2013)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 662, |
| "end": 682, |
| "text": "(Wang et al., 2013b)", |
| "ref_id": "BIBREF42" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 309, |
| "end": 316, |
| "text": "Table 3", |
| "ref_id": null |
| }, |
| { |
| "start": 351, |
| "end": 358, |
| "text": "Table 2", |
| "ref_id": null |
| }, |
| { |
| "start": 1091, |
| "end": 1100, |
| "text": "Figure 2e", |
| "ref_id": null |
| }, |
| { |
| "start": 1287, |
| "end": 1296, |
| "text": "Figure 2e", |
| "ref_id": null |
| }, |
| { |
| "start": 1985, |
| "end": 1994, |
| "text": "Figure 2f", |
| "ref_id": null |
| }, |
| { |
| "start": 2073, |
| "end": 2253, |
| "text": "Chinese German CRF 19m30s 7m15s CRFptt 34m2s 12m45s WCD13 3h17m 1h1m CWD13a 16h42m 4h49m CWD13b 16h42m 4h49m BPBK10 6h16m 2h42m CLiPER h 1h28m 16m30s CLiPERs 1h40m", |
| "ref_id": "TABREF2" |
| }, |
| { |
| "start": 2261, |
| "end": 2268, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Efficiency", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "be confused with a person name, since \"\u9ec4\" (Huang or Hwang) is also a common Chinese last name. 18 Again, knowing the English translation, which contains the indicative word \"River\", helps disambiguation.", |
| "cite_spans": [ |
| { |
| "start": 93, |
| "end": 95, |
| "text": "18", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussions", |
| "sec_num": "5" |
| }, |
| { |
| "text": "The CRF ptt and CLIPER h methods labeled these two examples correctly, but failed to produce the correct label for the example in Figure 1. A model trained with the CLIPER s method, on the other hand, correctly labels both entities in Figure 1, demonstrating the merits of the soft projection method.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 143, |
| "end": 149, |
| "text": "Figure", |
| "ref_id": null |
| }, |
| { |
| "start": 251, |
| "end": 259, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Discussions", |
| "sec_num": "5" |
| }, |
| { |
| "text": "We introduced a domain- and language-independent semi-supervised method for training discriminative models by projecting expectations across bitext. Experiments on Chinese and German NER show that our method, learned over bitext alone, can rival the performance of supervised models trained with thousands of labeled examples. Furthermore, applying our method in a setting where all labeled examples are available also yields improvements over state-of-the-art supervised methods. Our experiments also showed that soft expectation projection is preferable to hard projection. This technique generalizes to all sequence labeling tasks, and can be extended to include more complex constraints. For future work, we plan to apply this method to more language pairs, and to explore data selection strategies and the modeling of alignment uncertainties.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "For experimental purposes, we designate English as the resource-rich language, and other languages of interest as \"foreign\". In our experiments, we simulate the resource-poor scenario using Chinese and German, even though in reality these two languages are quite rich in resources.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Transactions of the Association for Computational Linguistics, 2 (2014) 55-66. Action Editor: Lillian Lee. Submitted 9/2013; Revised 12/2013; Published 2/2014. c 2014 Association for Computational Linguistics.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "A multi-view interpretation of self-training is that the self-tagged additional data offers new views to learners trained on the existing labeled data.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "This is a city in the state of Georgia in the USA, famous for its golf courses. It is ambiguous since both Augusta and Georgia can also be used as person names. 4 The different terminology employed by GE and PR may be confusing to discerning readers, but \"expectation\" in the context of GE means the same thing as \"marginal posterior\" in PR.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "We simplify notation by dropping the L2 regularizer in the CRF definition, but apply it in our experiments.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "In general, other loss functions such as KL-divergence can also be used for S. We found the squared L2 loss (L_2^2) to work well in practice.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "http://www.statmt.org/wmt13/ training-parallel-nc-v8.tgz 17 https://github.com/stanfordnlp/CoreNLP", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "In fact, a people search of the name \u9ec4\u6cb3 on the most popular Chinese social network (renren.com) returns over 13,000 matches.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "The authors would like to thank Jennifer Gillenwater for a discussion that inspired this work, Behrang Mohit and Nathan Schneider for their help with the Arabic NER data, and David Burkett for providing the source code of their work for comparison. We would also like to thank editor Lillian Lee and the three anonymous reviewers for their valuable comments and suggestions. We gratefully acknowledge the support of the U.S. Defense Advanced Research Projects Agency (DARPA) Broad Operational Language Translation (BOLT) program through IBM. Any opinions, findings, and conclusion or recommendations expressed in this material are those of the authors and do not necessarily reflect the view of DARPA, or the US government.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Head-transducer models for speech translation and their automatic acquisition from bilingual data. Machine Translation", |
| "authors": [ |
| { |
| "first": "Hiyan", |
| "middle": [], |
| "last": "Alshawi", |
| "suffix": "" |
| }, |
| { |
| "first": "Srinivas", |
| "middle": [], |
| "last": "Bangalore", |
| "suffix": "" |
| }, |
| { |
| "first": "Shona", |
| "middle": [], |
| "last": "Douglas", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hiyan Alshawi, Srinivas Bangalore, and Shona Douglas. 2000. Head-transducer models for speech translation and their automatic acquisition from bilingual data. Machine Translation, 15.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "A highperformance semi-supervised learning method for text chunking", |
| "authors": [ |
| { |
| "first": "Rie", |
| "middle": [], |
| "last": "Kubota", |
| "suffix": "" |
| }, |
| { |
| "first": "Ando", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| }, |
| { |
| "first": "Tong", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rie Kubota Ando and Tong Zhang. 2005. A high- performance semi-supervised learning method for text chunking. In Proceedings of ACL.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Alternating projections for learning with expectation constraints", |
| "authors": [ |
| { |
| "first": "Kedar", |
| "middle": [], |
| "last": "Bellare", |
| "suffix": "" |
| }, |
| { |
| "first": "Gregory", |
| "middle": [], |
| "last": "Druck", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "Mccallum", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of UAI", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kedar Bellare, Gregory Druck, and Andrew McCallum. 2009. Alternating projections for learning with expec- tation constraints. In Proceedings of UAI.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Combining labeled and unlabeled data with co-training", |
| "authors": [ |
| { |
| "first": "Avrim", |
| "middle": [], |
| "last": "Blum", |
| "suffix": "" |
| }, |
| { |
| "first": "Tom", |
| "middle": [], |
| "last": "Mitchell", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Proceedings of COLT", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Avrim Blum and Tom Mitchell. 1998. Combining la- beled and unlabeled data with co-training. In Proceed- ings of COLT.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Two languages are better than one (for syntactic parsing)", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Burkett", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David Burkett and Dan Klein. 2008. Two languages are better than one (for syntactic parsing). In Proceedings of EMNLP.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Learning better monolingual models with unannotated bilingual text", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Burkett", |
| "suffix": "" |
| }, |
| { |
| "first": "Slav", |
| "middle": [], |
| "last": "Petrov", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Blitzer", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of CoNLL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David Burkett, Slav Petrov, John Blitzer, and Dan Klein. 2010. Learning better monolingual models with unan- notated bilingual text. In Proceedings of CoNLL.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Coupled semi-supervised learning for information extraction", |
| "authors": [ |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "Carlson", |
| "suffix": "" |
| }, |
| { |
| "first": "Justin", |
| "middle": [], |
| "last": "Betteridge", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [ |
| "C" |
| ], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Estevam", |
| "middle": [ |
| "R" |
| ], |
| "last": "Hruschka", |
| "suffix": "" |
| }, |
| { |
| "first": "Tom", |
| "middle": [ |
| "M" |
| ], |
| "last": "Mitchell", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of WSDM", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Andrew Carlson, Justin Betteridge, Richard C. Wang, Es- tevam R. Hruschka Jr., and Tom M. Mitchell. 2010. Coupled semi-supervised learning for information ex- traction. In Proceedings of WSDM.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Guiding semi-supervision with constraintdriven learning", |
| "authors": [ |
| { |
| "first": "Ming-Wei", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "Lev", |
| "middle": [], |
| "last": "Ratinov", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Roth", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ming-Wei Chang, Lev Ratinov, and Dan Roth. 2007. Guiding semi-supervision with constraint- driven learning. In Proceedings of ACL.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Named entity recognition with bilingual constraints", |
| "authors": [ |
| { |
| "first": "Wanxiang", |
| "middle": [], |
| "last": "Che", |
| "suffix": "" |
| }, |
| { |
| "first": "Mengqiu", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of NAACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Wanxiang Che, Mengqiu Wang, and Christopher D. Man- ning. 2013. Named entity recognition with bilingual constraints. In Proceedings of NAACL.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Unsupervised models for named entity classification", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoram", |
| "middle": [], |
| "last": "Singer", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Proceedings of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael Collins and Yoram Singer. 1999. Unsupervised models for named entity classification. In Proceedings of EMNLP.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Unsupervised partof-speech tagging with bilingual graph-based projections", |
| "authors": [ |
| { |
| "first": "Dipanjan", |
| "middle": [], |
| "last": "Das", |
| "suffix": "" |
| }, |
| { |
| "first": "Slav", |
| "middle": [], |
| "last": "Petrov", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dipanjan Das and Slav Petrov. 2011. Unsupervised part-of-speech tagging with bilingual graph-based projections. In Proceedings of ACL.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "High-performance semi-supervised learning using discriminatively constrained generative models", |
| "authors": [ |
| { |
| "first": "Gregory", |
| "middle": [], |
| "last": "Druck", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "Mccallum", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of ICML", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gregory Druck and Andrew McCallum. 2010. High-performance semi-supervised learning using discriminatively constrained generative models. In Proceedings of ICML.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Leveraging existing resources using generalized expectation criteria", |
| "authors": [ |
| { |
| "first": "Gregory", |
| "middle": [], |
| "last": "Druck", |
| "suffix": "" |
| }, |
| { |
| "first": "Gideon", |
| "middle": [], |
| "last": "Mann", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "Mccallum", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of NIPS Workshop on Learning Problem Design", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gregory Druck, Gideon Mann, and Andrew McCallum. 2007. Leveraging existing resources using generalized expectation criteria. In Proceedings of NIPS Workshop on Learning Problem Design.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Active learning by labeling features", |
| "authors": [ |
| { |
| "first": "Gregory", |
| "middle": [], |
| "last": "Druck", |
| "suffix": "" |
| }, |
| { |
| "first": "Burr", |
| "middle": [], |
| "last": "Settles", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "Mccallum", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gregory Druck, Burr Settles, and Andrew McCallum. 2009. Active learning by labeling features. In Proceedings of EMNLP.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Generalized Expectation Criteria for Lightly Supervised Learning", |
| "authors": [ |
| { |
| "first": "Gregory", |
| "middle": [], |
| "last": "Druck", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gregory Druck. 2011. Generalized Expectation Criteria for Lightly Supervised Learning. Ph.D. thesis, University of Massachusetts Amherst.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Training and evaluating a German named entity recognizer with semantic generalization", |
| "authors": [ |
| { |
| "first": "Manaal", |
| "middle": [], |
| "last": "Faruqui", |
| "suffix": "" |
| }, |
| { |
| "first": "Sebastian", |
| "middle": [], |
| "last": "Pad\u00f3", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of KONVENS", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Manaal Faruqui and Sebastian Pad\u00f3. 2010. Training and evaluating a German named entity recognizer with semantic generalization. In Proceedings of KONVENS.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Solving the problem of cascading errors: Approximate Bayesian inference for linguistic annotation pipelines", |
| "authors": [ |
| { |
| "first": "Jenny", |
| "middle": [ |
| "Rose" |
| ], |
| "last": "Finkel", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [ |
| "Y" |
| ], |
| "last": "Ng", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jenny Rose Finkel, Christopher D. Manning, and Andrew Y. Ng. 2006. Solving the problem of cascading errors: Approximate Bayesian inference for linguistic annotation pipelines. In Proceedings of EMNLP.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Automatically inducing a part-of-speech tagger by projecting from multiple source languages across aligned corpora", |
| "authors": [ |
| { |
| "first": "Victoria", |
| "middle": [], |
| "last": "Fossum", |
| "suffix": "" |
| }, |
| { |
| "first": "Steven", |
| "middle": [], |
| "last": "Abney", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of IJCNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Victoria Fossum and Steven Abney. 2005. Automatically inducing a part-of-speech tagger by projecting from multiple source languages across aligned corpora. In Proceedings of IJCNLP.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Cross-lingual discriminative learning of sequence models with posterior regularization", |
| "authors": [ |
| { |
| "first": "Kuzman", |
| "middle": [], |
| "last": "Ganchev", |
| "suffix": "" |
| }, |
| { |
| "first": "Dipanjan", |
| "middle": [], |
| "last": "Das", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kuzman Ganchev and Dipanjan Das. 2013. Cross-lingual discriminative learning of sequence models with posterior regularization. In Proceedings of EMNLP.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Dependency grammar induction via bitext projection constraints", |
| "authors": [ |
| { |
| "first": "Kuzman", |
| "middle": [], |
| "last": "Ganchev", |
| "suffix": "" |
| }, |
| { |
| "first": "Jennifer", |
| "middle": [], |
| "last": "Gillenwater", |
| "suffix": "" |
| }, |
| { |
| "first": "Ben", |
| "middle": [], |
| "last": "Taskar", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kuzman Ganchev, Jennifer Gillenwater, and Ben Taskar. 2009. Dependency grammar induction via bitext projection constraints. In Proceedings of ACL.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Posterior regularization for structured latent variable models", |
| "authors": [ |
| { |
| "first": "Kuzman", |
| "middle": [], |
| "last": "Ganchev", |
| "suffix": "" |
| }, |
| { |
| "first": "Jo\u00e3o", |
| "middle": [], |
| "last": "Gra\u00e7a", |
| "suffix": "" |
| }, |
| { |
| "first": "Jennifer", |
| "middle": [], |
| "last": "Gillenwater", |
| "suffix": "" |
| }, |
| { |
| "first": "Ben", |
| "middle": [], |
| "last": "Taskar", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "JMLR", |
| "volume": "11", |
| "issue": "", |
| "pages": "2001--2049", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kuzman Ganchev, Jo\u00e3o Gra\u00e7a, Jennifer Gillenwater, and Ben Taskar. 2010. Posterior regularization for structured latent variable models. JMLR, 11:2001-2049.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "New Directions in Semi-supervised Learning", |
| "authors": [ |
| { |
| "first": "Andrew", |
| "middle": [ |
| "B" |
| ], |
| "last": "Goldberg", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Andrew B. Goldberg. 2010. New Directions in Semi-supervised Learning. Ph.D. thesis, University of Wisconsin-Madison.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "OntoNotes: the 90% solution", |
| "authors": [ |
| { |
| "first": "Eduard", |
| "middle": [], |
| "last": "Hovy", |
| "suffix": "" |
| }, |
| { |
| "first": "Mitchell", |
| "middle": [], |
| "last": "Marcus", |
| "suffix": "" |
| }, |
| { |
| "first": "Martha", |
| "middle": [], |
| "last": "Palmer", |
| "suffix": "" |
| }, |
| { |
| "first": "Lance", |
| "middle": [], |
| "last": "Ramshaw", |
| "suffix": "" |
| }, |
| { |
| "first": "Ralph", |
| "middle": [], |
| "last": "Weischedel", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of NAACL-HLT", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Eduard Hovy, Mitchell Marcus, Martha Palmer, Lance Ramshaw, and Ralph Weischedel. 2006. OntoNotes: the 90% solution. In Proceedings of NAACL-HLT.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "The Unsupervised Learning of Natural Language Structure", |
| "authors": [ |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dan Klein. 2005. The Unsupervised Learning of Natural Language Structure. Ph.D. thesis, Stanford University.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data", |
| "authors": [ |
| { |
| "first": "John", |
| "middle": [ |
| "D" |
| ], |
| "last": "Lafferty", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "Mccallum", |
| "suffix": "" |
| }, |
| { |
| "first": "Fernando", |
| "middle": [ |
| "C N" |
| ], |
| "last": "Pereira", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Proceedings of ICML", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of ICML.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "First- and second-order expectation semirings with applications to minimum-risk training on translation forests", |
| "authors": [ |
| { |
| "first": "Zhifei", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Eisner", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zhifei Li and Jason Eisner. 2009. First- and second-order expectation semirings with applications to minimum-risk training on translation forests. In Proceedings of EMNLP.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Wiki-ly supervised part-of-speech tagging", |
| "authors": [ |
| { |
| "first": "Shen", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Jo\u00e3o", |
| "middle": [], |
| "last": "Gra\u00e7a", |
| "suffix": "" |
| }, |
| { |
| "first": "Ben", |
| "middle": [], |
| "last": "Taskar", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of EMNLP-CoNLL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Shen Li, Jo\u00e3o Gra\u00e7a, and Ben Taskar. 2012. Wiki-ly supervised part-of-speech tagging. In Proceedings of EMNLP-CoNLL.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Semi-supervised learning for natural language", |
| "authors": [ |
| { |
| "first": "Percy", |
| "middle": [], |
| "last": "Liang", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Percy Liang. 2005. Semi-supervised learning for natural language. Master's thesis, Massachusetts Institute of Technology.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Generalized expectation criteria for semi-supervised learning with weakly labeled data", |
| "authors": [ |
| { |
| "first": "Gideon", |
| "middle": [], |
| "last": "Mann", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "Mccallum", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "JMLR", |
| "volume": "11", |
| "issue": "", |
| "pages": "955--984", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gideon Mann and Andrew McCallum. 2010. Generalized expectation criteria for semi-supervised learning with weakly labeled data. JMLR, 11:955-984.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "Effective self-training for parsing", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Mcclosky", |
| "suffix": "" |
| }, |
| { |
| "first": "Eugene", |
| "middle": [], |
| "last": "Charniak", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Johnson", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of NAACL-HLT", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David McClosky, Eugene Charniak, and Mark Johnson. 2006. Effective self-training for parsing. In Proceedings of NAACL-HLT.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "Multilingual part-of-speech tagging: Two unsupervised approaches", |
| "authors": [ |
| { |
| "first": "Tahira", |
| "middle": [], |
| "last": "Naseem", |
| "suffix": "" |
| }, |
| { |
| "first": "Benjamin", |
| "middle": [], |
| "last": "Snyder", |
| "suffix": "" |
| }, |
| { |
| "first": "Jacob", |
| "middle": [], |
| "last": "Eisenstein", |
| "suffix": "" |
| }, |
| { |
| "first": "Regina", |
| "middle": [], |
| "last": "Barzilay", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "JAIR", |
| "volume": "36", |
| "issue": "", |
| "pages": "1076--9757", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tahira Naseem, Benjamin Snyder, Jacob Eisenstein, and Regina Barzilay. 2009. Multilingual part-of-speech tagging: Two unsupervised approaches. JAIR, 36:1076-9757.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "Uptraining for accurate deterministic question parsing", |
| "authors": [ |
| { |
| "first": "Slav", |
| "middle": [], |
| "last": "Petrov", |
| "suffix": "" |
| }, |
| { |
| "first": "Pi-Chuan", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Ringgaard", |
| "suffix": "" |
| }, |
| { |
| "first": "Hiyan", |
| "middle": [], |
| "last": "Alshawi", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Slav Petrov, Pi-Chuan Chang, Michael Ringgaard, and Hiyan Alshawi. 2010. Uptraining for accurate deterministic question parsing. In Proceedings of EMNLP.", |
| "links": null |
| }, |
| "BIBREF32": { |
| "ref_id": "b32", |
| "title": "Text chunking using transformation-based learning", |
| "authors": [ |
| { |
| "first": "Lance", |
| "middle": [ |
| "A" |
| ], |
| "last": "Ramshaw", |
| "suffix": "" |
| }, |
| { |
| "first": "Mitchell", |
| "middle": [ |
| "P" |
| ], |
| "last": "Marcus", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Natural Language Processing Using Very Large Corpora", |
| "volume": "11", |
| "issue": "", |
| "pages": "157--176", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lance A. Ramshaw and Mitchell P. Marcus. 1999. Text chunking using transformation-based learning. Natural Language Processing Using Very Large Corpora, 11:157-176.", |
| "links": null |
| }, |
| "BIBREF33": { |
| "ref_id": "b33", |
| "title": "Unified expectation maximization", |
| "authors": [ |
| { |
| "first": "Rajhans", |
| "middle": [], |
| "last": "Samdani", |
| "suffix": "" |
| }, |
| { |
| "first": "Ming-Wei", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Roth", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of NAACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rajhans Samdani, Ming-Wei Chang, and Dan Roth. 2012. Unified expectation maximization. In Proceedings of NAACL.", |
| "links": null |
| }, |
| "BIBREF34": { |
| "ref_id": "b34", |
| "title": "Introduction to the CoNLL-2003 shared task: language-independent named entity recognition", |
| "authors": [ |
| { |
| "first": "Erik", |
| "middle": [ |
| "F" |
| ], |
| "last": "Tjong Kim Sang", |
| "suffix": "" |
| }, |
| { |
| "first": "Fien", |
| "middle": [], |
| "last": "De Meulder", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proceedings of CoNLL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: language-independent named entity recognition. In Proceedings of CoNLL.", |
| "links": null |
| }, |
| "BIBREF35": { |
| "ref_id": "b35", |
| "title": "A co-regularization approach to semisupervised learning with multiple views", |
| "authors": [ |
| { |
| "first": "Vikas", |
| "middle": [], |
| "last": "Sindhwani", |
| "suffix": "" |
| }, |
| { |
| "first": "Partha", |
| "middle": [], |
| "last": "Niyogi", |
| "suffix": "" |
| }, |
| { |
| "first": "Mikhail", |
| "middle": [], |
| "last": "Belkin", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of ICML Workshop on Learning with Multiple Views, International Conference on Machine Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Vikas Sindhwani, Partha Niyogi, and Mikhail Belkin. 2005. A co-regularization approach to semi-supervised learning with multiple views. In Proceedings of ICML Workshop on Learning with Multiple Views, International Conference on Machine Learning.", |
| "links": null |
| }, |
| "BIBREF36": { |
| "ref_id": "b36", |
| "title": "Novel Estimation Methods for Unsupervised Discovery of Latent Structure in Natural Language Text", |
| "authors": [ |
| { |
| "first": "Noah", |
| "middle": [ |
| "A" |
| ], |
| "last": "Smith", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Noah A. Smith. 2006. Novel Estimation Methods for Unsupervised Discovery of Latent Structure in Natural Language Text. Ph.D. thesis, Johns Hopkins University.", |
| "links": null |
| }, |
| "BIBREF37": { |
| "ref_id": "b37", |
| "title": "Unsupervised multilingual grammar induction", |
| "authors": [ |
| { |
| "first": "Benjamin", |
| "middle": [], |
| "last": "Snyder", |
| "suffix": "" |
| }, |
| { |
| "first": "Tahira", |
| "middle": [], |
| "last": "Naseem", |
| "suffix": "" |
| }, |
| { |
| "first": "Regina", |
| "middle": [], |
| "last": "Barzilay", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Benjamin Snyder, Tahira Naseem, and Regina Barzilay. 2009. Unsupervised multilingual grammar induction. In Proceedings of ACL.", |
| "links": null |
| }, |
| "BIBREF38": { |
| "ref_id": "b38", |
| "title": "Semi-supervised sequential labeling and segmentation using giga-word scale unlabeled data", |
| "authors": [ |
| { |
| "first": "Jun", |
| "middle": [], |
| "last": "Suzuki", |
| "suffix": "" |
| }, |
| { |
| "first": "Hideki", |
| "middle": [], |
| "last": "Isozaki", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jun Suzuki and Hideki Isozaki. 2008. Semi-supervised sequential labeling and segmentation using giga-word scale unlabeled data. In Proceedings of ACL.", |
| "links": null |
| }, |
| "BIBREF39": { |
| "ref_id": "b39", |
| "title": "Token and type constraints for cross-lingual part-of-speech tagging", |
| "authors": [ |
| { |
| "first": "Oscar", |
| "middle": [], |
| "last": "T\u00e4ckstr\u00f6m", |
| "suffix": "" |
| }, |
| { |
| "first": "Dipanjan", |
| "middle": [], |
| "last": "Das", |
| "suffix": "" |
| }, |
| { |
| "first": "Slav", |
| "middle": [], |
| "last": "Petrov", |
| "suffix": "" |
| }, |
| { |
| "first": "Ryan", |
| "middle": [], |
| "last": "McDonald", |
| "suffix": "" |
| }, |
| { |
| "first": "Joakim", |
| "middle": [], |
| "last": "Nivre", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Oscar T\u00e4ckstr\u00f6m, Dipanjan Das, Slav Petrov, Ryan McDonald, and Joakim Nivre. 2013. Token and type constraints for cross-lingual part-of-speech tagging. In Proceedings of ACL.", |
| "links": null |
| }, |
| "BIBREF40": { |
| "ref_id": "b40", |
| "title": "Word representations: A simple and general method for semi-supervised learning", |
| "authors": [ |
| { |
| "first": "Joseph", |
| "middle": [], |
| "last": "Turian", |
| "suffix": "" |
| }, |
| { |
| "first": "Lev", |
| "middle": [], |
| "last": "Ratinov", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Joseph Turian, Lev Ratinov, and Yoshua Bengio. 2010. Word representations: A simple and general method for semi-supervised learning. In Proceedings of ACL.", |
| "links": null |
| }, |
| "BIBREF41": { |
| "ref_id": "b41", |
| "title": "Effective bilingual constraints for semi-supervised learning of named entity recognizers", |
| "authors": [ |
| { |
| "first": "Mengqiu", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Wanxiang", |
| "middle": [], |
| "last": "Che", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of AAAI", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mengqiu Wang, Wanxiang Che, and Christopher D. Manning. 2013a. Effective bilingual constraints for semi-supervised learning of named entity recognizers. In Proceedings of AAAI.", |
| "links": null |
| }, |
| "BIBREF42": { |
| "ref_id": "b42", |
| "title": "Joint word alignment and bilingual named entity recognition using dual decomposition", |
| "authors": [ |
| { |
| "first": "Mengqiu", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Wanxiang", |
| "middle": [], |
| "last": "Che", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mengqiu Wang, Wanxiang Che, and Christopher D. Manning. 2013b. Joint word alignment and bilingual named entity recognition using dual decomposition. In Proceedings of ACL.", |
| "links": null |
| }, |
| "BIBREF43": { |
| "ref_id": "b43", |
| "title": "A backoff model for bootstrapping resources for non-English languages", |
| "authors": [ |
| { |
| "first": "Chenhai", |
| "middle": [], |
| "last": "Xi", |
| "suffix": "" |
| }, |
| { |
| "first": "Rebecca", |
| "middle": [], |
| "last": "Hwa", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of HLT-EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chenhai Xi and Rebecca Hwa. 2005. A backoff model for bootstrapping resources for non-English languages. In Proceedings of HLT-EMNLP.", |
| "links": null |
| }, |
| "BIBREF44": { |
| "ref_id": "b44", |
| "title": "Inducing multilingual POS taggers and NP bracketers via robust projection across aligned corpora", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Yarowsky", |
| "suffix": "" |
| }, |
| { |
| "first": "Grace", |
| "middle": [], |
| "last": "Ngai", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Proceedings of NAACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David Yarowsky and Grace Ngai. 2001. Inducing multilingual POS taggers and NP bracketers via robust projection across aligned corpora. In Proceedings of NAACL.", |
| "links": null |
| }, |
| "BIBREF45": { |
| "ref_id": "b45", |
| "title": "Unsupervised word sense disambiguation rivaling supervised methods", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Yarowsky", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David Yarowsky. 1995. Unsupervised word sense disambiguation rivaling supervised methods. In Proceedings of ACL.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "num": null, |
| "text": "Diagram illustrating the projection of model expectation from English to Chinese. The posterior probabilities assigned by the English CRF model are shown above each English word; automatically induced word alignments are shown in red; the correct projected labels for Chinese words are shown in green, and incorrect labels are shown in red.", |
| "uris": null, |
| "type_str": "figure" |
| }, |
| "TABREF0": { |
| "html": null, |
| "content": "<table><tr><td>O:1.0000</td><td>PER:0.6369</td><td>PER:0.6377</td><td>O:1.0000</td><td>O:1.0000</td><td/><td colspan=\"2\">PER:0.5925</td><td>PER:0.5925</td></tr><tr><td>LOC:0.0000</td><td>LOC:0.3250</td><td>LOC:0.3256</td><td>LOC:0.0000</td><td>LOC:0.0000</td><td/><td colspan=\"2\">ORG:0.4060</td><td>ORG:0.4061</td></tr><tr><td>ORG:0.0000</td><td>ORG:0.0308</td><td>ORG:0.0307</td><td>ORG:0.0000</td><td>ORG:0.0000</td><td/><td colspan=\"2\">O:0.0012</td><td>O:0.0011</td></tr><tr><td>GPE:0.0000</td><td>GPE:0.0042</td><td>GPE:0.0042</td><td>GPE:0.0000</td><td>GPE:0.0000</td><td/><td colspan=\"2\">LOC:0.0003</td><td>LOC:0.0003</td></tr><tr><td>PER:0.0000</td><td>O:0.0032</td><td>O:0.0037</td><td>PER:0.0000</td><td>PER:0.0000</td><td/><td colspan=\"2\">GPE:0.0000</td><td>GPE:0.0000</td></tr><tr><td>\u5728</td><td/><td/><td/><td/><td/><td/></tr><tr><td colspan=\"2\">PER:0.6373</td><td>O:1.0000</td><td/><td>O:1.0000</td><td>PER:0.5925</td><td>PER:0.5925</td><td>O:1.0000</td></tr><tr><td colspan=\"2\">LOC:0.3253</td><td>LOC:0.0000</td><td colspan=\"2\">LOC:0.0000</td><td>ORG:0.4060</td><td>ORG:0.4061</td><td>LOC:0.0000</td></tr><tr><td colspan=\"2\">ORG:0.0307</td><td>ORG:0.0000</td><td colspan=\"2\">ORG:0.0000</td><td>O:0.0012</td><td>O:0.0011</td><td>ORG:0.0000</td></tr><tr><td colspan=\"2\">GPE:0.0042</td><td>GPE:0.0000</td><td colspan=\"2\">GPE:0.0000</td><td>LOC:0.0003</td><td>LOC:0.0003</td><td>GPE:0.0000</td></tr><tr><td colspan=\"2\">O:0.0035</td><td>PER:0.0000</td><td colspan=\"2\">PER:0.0000</td><td>GPE:0.0000</td><td>GPE:0.0000</td><td>PER:0.0000</td></tr></table>", |
| "text": "a reception in Luobu Linka . . . . . . met with representatives of Zhongguo Ribao \u7f57\u5e03\u6797\u5361 \u4e3e\u884c \u7684 \u62db\u5f85\u4f1a . . . . . . \u4f1a\u89c1 \u4e86 \u4e2d\u56fd \u65e5\u62a5 \u4ee3\u8868", |
| "type_str": "table", |
| "num": null |
| }, |
| "TABREF2": { |
| "html": null, |
| "content": "", |
| "text": "Raw counts in the error confusion matrix of English CRF models. The top table contains the counts on OntoNotes test data, and the bottom table contains CoNLL-03 test data counts. Rows are the true labels and columns are the observed labels. For example, the item at row 2, column 3 of the top table reads: we observed 5 times where the true label should be PERSON, but the English CRF model output label LOCATION.", |
| "type_str": "table", |
| "num": null |
| }, |
| "TABREF4": { |
| "html": null, |
| "content": "<table><tr><td></td><td colspan=\"3\">Chinese</td><td colspan=\"3\">German</td></tr><tr><td></td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td></tr><tr><td>CRF</td><td>79.09</td><td>63.59</td><td>70.50</td><td>86.69</td><td>71.30</td><td>78.25</td></tr><tr><td>CRFptt</td><td>84.01</td><td>45.29</td><td>58.85</td><td>81.50</td><td>75.56</td><td>78.41</td></tr><tr><td>BPBK10</td><td>79.25</td><td>65.67</td><td>71.83</td><td>84.00</td><td>72.17</td><td>77.64</td></tr><tr><td>CWD13</td><td>81.31</td><td>65.50</td><td>72.55</td><td>85.99</td><td>72.98</td><td>78.95</td></tr><tr><td>WCD13a</td><td>80.31</td><td>65.78</td><td>72.33</td><td>85.98</td><td>72.37</td><td>78.59</td></tr><tr><td>WCD13b</td><td>78.55</td><td>66.54</td><td>72.05</td><td>85.19</td><td>72.98</td><td>78.62</td></tr><tr><td>CLiPER hard</td><td>83.67</td><td>64.80</td><td>73.04 \u00a7\u2021</td><td>86.52</td><td>72.02</td><td>78.61 *</td></tr><tr><td>CLiPER soft</td><td>82.57</td><td>65.99</td><td>73.35 \u00a7\u2020*</td><td>87.11</td><td>72.56</td><td>79.17 \u2021*\u00a7</td></tr></table>", |
| "text": "Figure 2: Top four figures show performance curves of CLiPER with varying amounts of available labeled training data in a weakly supervised setting. Horizontal axes show the number of labeled training sentences (in thousands); vertical axes show the F1 score on the test set. Performance curves of supervised CRF and \"project-then-train\" CRF are plotted for comparison ((c) Soft vs. Hard on Chinese Test; (d) Soft vs. Hard on German Test). Bottom two figures ((e) word preceding \"monument\" is PERSON; (f) word preceding \"monument\" is LOCATION) are examples of aligned sentence pairs in Chinese and English.", |
| "type_str": "table", |
| "num": null |
| } |
| } |
| } |
| } |