{
"paper_id": "H01-1035",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:31:06.173852Z"
},
"title": "Inducing Multilingual Text Analysis Tools via Robust Projection across Aligned Corpora",
"authors": [
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University Baltimore",
"location": {
"postCode": "21218",
"region": "MD",
"country": "USA"
}
},
"email": "yarowsky@cs.jhu.edu"
},
{
"first": "Grace",
"middle": [],
"last": "Ngai",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University",
"location": {
"postCode": "21218",
"settlement": "Baltimore",
"region": "MD",
"country": "USA"
}
},
"email": ""
},
{
"first": "Richard",
"middle": [],
"last": "Wicentowski",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University",
"location": {
"postCode": "21218",
"settlement": "Baltimore",
"region": "MD",
"country": "USA"
}
},
"email": "richardw@cs.jhu.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes a system and set of algorithms for automatically inducing stand-alone monolingual part-of-speech taggers, base noun-phrase bracketers, named-entity taggers and morphological analyzers for an arbitrary foreign language. Case studies include French, Chinese, Czech and Spanish. Existing text analysis tools for English are applied to bilingual text corpora and their output projected onto the second language via statistically derived word alignments. Simple direct annotation projection is quite noisy, however, even with optimal alignments. Thus this paper presents noise-robust tagger, bracketer and lemmatizer training procedures capable of accurate system bootstrapping from noisy and incomplete initial projections. Performance of the induced stand-alone part-of-speech tagger applied to French achieves 96% core part-of-speech (POS) tag accuracy, and the corresponding induced noun-phrase bracketer exceeds 91% F-measure. The induced morphological analyzer achieves over 99% lemmatization accuracy on the complete French verbal system. This achievement is particularly noteworthy in that it required absolutely no hand-annotated training data in the given language, and virtually no language-specific knowledge or resources beyond raw text. Performance also significantly exceeds that obtained by direct annotation projection.",
"pdf_parse": {
"paper_id": "H01-1035",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes a system and set of algorithms for automatically inducing stand-alone monolingual part-of-speech taggers, base noun-phrase bracketers, named-entity taggers and morphological analyzers for an arbitrary foreign language. Case studies include French, Chinese, Czech and Spanish. Existing text analysis tools for English are applied to bilingual text corpora and their output projected onto the second language via statistically derived word alignments. Simple direct annotation projection is quite noisy, however, even with optimal alignments. Thus this paper presents noise-robust tagger, bracketer and lemmatizer training procedures capable of accurate system bootstrapping from noisy and incomplete initial projections. Performance of the induced stand-alone part-of-speech tagger applied to French achieves 96% core part-of-speech (POS) tag accuracy, and the corresponding induced noun-phrase bracketer exceeds 91% F-measure. The induced morphological analyzer achieves over 99% lemmatization accuracy on the complete French verbal system. This achievement is particularly noteworthy in that it required absolutely no hand-annotated training data in the given language, and virtually no language-specific knowledge or resources beyond raw text. Performance also significantly exceeds that obtained by direct annotation projection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "A fundamental roadblock to developing statistical taggers, bracketers and other analyzers for many of the world's 200+ major languages is the shortage or absence of annotated training data for the large majority of these languages. Ideally, one would like to leverage the large existing investments in annotated data and tools for resource-rich languages (such as English and Japanese) to overcome the annotated resource shortage in other languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TASK OVERVIEW",
"sec_num": "1."
},
{
"text": "To show the broad potential of our approach and methods, this paper will investigate four fundamental language analysis tasks: POS tagging, base noun phrase (baseNP) bracketing, named entity tagging, and inflectional morphological analysis, as illustrated in Figures 1 and 2 . These bedrock tools are important components of the language analysis pipelines for many applications, and their low cost extension to new languages, as described here, can serve as a broadly useful enabling resource.",
"cite_spans": [],
"ref_spans": [
{
"start": 259,
"end": 274,
"text": "Figures 1 and 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "TASK OVERVIEW",
"sec_num": "1."
},
{
"text": "Previous research on the word alignment of parallel corpora has tended to focus on their use in translation model training for MT rather than on monolingual applications. One exception is bilingual parsing. Wu (1995, 1997) investigated the use of concurrent parsing of parallel corpora in an inversion transduction framework, helping to resolve attachment ambiguities in one language by the coupled parsing state in the second language. Jones and Havrilla (1998) utilized similar joint parsing techniques (twisted-pair grammars) for word reordering in target language generation.",
"cite_spans": [
{
"start": 207,
"end": 215,
"text": "Wu (1995",
"ref_id": "BIBREF12"
},
{
"start": 216,
"end": 227,
"text": "Wu (1997)",
"ref_id": "BIBREF13"
},
{
"start": 441,
"end": 466,
"text": "Jones and Havrilla (1998)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "BACKGROUND",
"sec_num": "2."
},
{
"text": "However, with these exceptions in the field of parsing, to our knowledge no one has previously used linguistic annotation projection via aligned bilingual corpora to induce traditional standalone monolingual text analyzers in other languages. Thus both our proposed projection and induction methods, and their application to multilingual POS tagging, named-entity classification and morphological analysis induction, appear to be highly novel.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BACKGROUND",
"sec_num": "2."
},
{
"text": "The data sets used in these experiments included the English-French Canadian Hansards, the English-Chinese Hong Kong Hansards, and a parallel Czech-English Reader's Digest collection. In addition, multiple versions of the Bible were used, including the French Douay-Rheims Bible, Spanish Reina Valera Bible, and three English Bible versions (King James, New International and Revised Standard), automatically verse-aligned in multiple pairings. All corpora were automatically word-aligned by the now publicly available EGYPT system (Al-Onaizan et al., 1999), based on IBM's Model 3 statistical MT formalism (Brown et al., 1990) . The tagging and bracketing tasks utilized approximately 2 million words in each language, with the sample sizes for morphology induction given in Table 3 . All word alignments utilized strictly raw-word-based model variants for English/French/Spanish/Czech and character-based model variants for Chinese, with no use of morphological analysis or stemming, POS-tagging, bracketing or dictionary resources.",
"cite_spans": [
{
"start": 605,
"end": 625,
"text": "(Brown et al., 1990)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 774,
"end": 781,
"text": "Table 3",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "DATA RESOURCES",
"sec_num": "3."
},
{
"text": "Part-of-speech tagging is the first of four applications covered in this paper. The goal of this work is to project POS analysis capabilities from one language to another via word-aligned parallel bilingual corpora. To do so, we use an existing POS tagger (e.g. Brill, 1995) to annotate the English side of the parallel corpus. Then, as illustrated in Figure 1 for Chinese and French, the raw tags are transferred via the word alignments, yielding an extremely noisy initial training set for the 2nd language. The third crucial step is to generalize from these noisy projected annotations in a robust way, yielding a stand-alone POS tagger for the new language that is considerably more accurate than the initial projected tags.",
"cite_spans": [
{
"start": 262,
"end": 274,
"text": "Brill, 1995)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 352,
"end": 360,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "PART-OF-SPEECH TAGGER INDUCTION",
"sec_num": "4."
},
{
"text": "Additional details of this algorithm are given in Yarowsky and Ngai (2001) . Due to lack of space, the following sections will serve primarily as an overview of the algorithm and its salient issues.",
"cite_spans": [
{
"start": 50,
"end": 74,
"text": "Yarowsky and Ngai (2001)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "PART-OF-SPEECH TAGGER INDUCTION",
"sec_num": "4."
},
{
"text": "First, because of considerable cross-language differences in fine-grained tag set inventories, this work focuses on accurately assigning core POS categories (e.g. noun, verb, adverb, adjective, etc.), with additional distinctions in verb tense, noun number and pronoun type as captured in the English tagset inventory. Although impoverished relative to some languages, and incapable of resolving details such as grammatical gender, this Brown-corpus-based tagset granularity is sufficient for many applications. Furthermore, many finer-grained part-of-speech distinctions are resolved primarily by morphology, as handled in Section 7. Finally, if one desires to induce a finer-grained tagging capability for case, for example, one should project from a reference language such as Czech, where case is lexically marked. Figure 3 illustrates six scenarios encountered when projecting POS tags from English to a language such as French. The first two show straightforward 1-to-1 projections, which are encountered in roughly two-thirds of English words. Phrasal (1-to-N) alignments offer greater challenges, as typically only a subset of the aligned words accept the English tag. To distinguish these cases, we initially assign position-sensitive phrasal parts-of-speech via subscripting (e.g. Les/NNS_1 lois/NNS_2), and subsequently learn a probabilistic mapping to core, non-phrasal parts of speech (e.g.",
"cite_spans": [],
"ref_spans": [
{
"start": 818,
"end": 826,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Part-of-speech Projection Issues",
"sec_num": "4.1"
},
{
"text": "P(DT | NNS_1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Part-of-speech Projection Issues",
"sec_num": "4.1"
},
{
"text": ") that is used along with tag sequence and lexical prior models to re-tag these phrasal POS projections. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Part-of-speech Projection Issues",
"sec_num": "4.1"
},
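The 1-to-N remapping step described above (subscripted phrasal tags mapped probabilistically back to core tags) can be sketched as below; this is a minimal count-based estimator for illustration, with hypothetical names, not the authors' implementation:

```python
def learn_core_mapping(observations):
    """Estimate P(core_tag | phrasal_subscripted_tag) from observed
    (phrasal_tag, core_tag) pairs, e.g. ("NNS_1", "DT") for French
    'Les' receiving position 1 of an English NNS phrase projection."""
    counts = {}
    for phrasal, core in observations:
        bucket = counts.setdefault(phrasal, {})
        bucket[core] = bucket.get(core, 0) + 1
    # normalize counts into conditional probabilities
    return {phrasal: {core: n / sum(bucket.values())
                      for core, n in bucket.items()}
            for phrasal, bucket in counts.items()}
```

The resulting distribution would then be combined with tag-sequence and lexical-prior models to re-tag the phrasal POS projections.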
{
"text": "Even at the relatively low tagset granularity of English, direct projection of core POS tags onto French achieves only 76% accuracy using EGYPT's automatic word alignments (as shown in Table 1 ). Part of this deficiency is due to word-alignment error; when word alignments were manually corrected, direct projection core-tag accuracy increased to 85%. Also, standard bigram taggers trained on the automatically projected data achieve only modest success at generalization (86% when reapplied to the noisy training data). More highly lexicalized learning algorithms exhibit even greater potential for overmodeling the specific projection errors of this data.",
"cite_spans": [],
"ref_spans": [
{
"start": 185,
"end": 192,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Noise-robust POS Tagger Training",
"sec_num": "4.2"
},
{
"text": "Thus our research has focused on noise-robust techniques for distilling a conservative but effective tagger from this challenging raw projection data. In particular, we modify standard n-gram modeling to separate the training of the tag sequence model P(t_i | t_{i-1}, t_{i-2}) from the lexical prior models P(w_i | t_i), and apply different confidence weighting and signal amplification techniques to both.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Noise-robust POS Tagger Training",
"sec_num": "4.2"
},
{
"text": "In contrast, the training of the tag sequence model",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tag Sequence Model Estimation",
"sec_num": "4.2.2"
},
{
"text": "P(t_i | t_{i-1}, t_{i-2})",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tag Sequence Model Estimation",
"sec_num": "4.2.2"
},
{
"text": "focuses on confidence weighting and filtering of projected training subsequences. The contribution of each candidate training sentence is weighted proportionally to both its EGYPT/GIZA sentence-level alignment score and an agreement measure between the projected tags and the first-iteration lexical priors, a rough measure of alignment reasonableness. Given the observed bursty distribution of alignment errors in the corpus, this downweighting of low-confidence alignment regions substantially improves sequence model quality with tolerable reduction in training volume.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tag Sequence Model Estimation",
"sec_num": "4.2.2"
},
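A minimal sketch of this confidence weighting, combining the sentence-level alignment score with an agreement measure against the first-iteration lexical priors; the simple product combination and the agreement definition here are illustrative assumptions, not the exact formulation:

```python
def agreement(projected, priors):
    """Fraction of (word, tag) pairs whose projected tag is the most
    probable tag under the first-iteration lexical priors; words
    unseen in the priors are counted as agreeing."""
    hits = 0
    for word, tag in projected:
        dist = priors.get(word, {tag: 0.0})
        if dist.get(tag, 0.0) == max(dist.values()):
            hits += 1
    return hits / len(projected)

def sentence_weight(alignment_score, projected, priors):
    """Downweight training sentences with poor alignment scores or
    projected tags that disagree with the lexical priors."""
    return alignment_score * agreement(projected, priors)
```

Sentences below a weight threshold would contribute little or nothing to the tag sequence model, implementing the downweighting of bursty low-confidence alignment regions.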
{
"text": "As shown in Table 1 , performance is evaluated on two evaluation data sets, including an independent 200K-word hand-tagged French dataset provided by Universit\u00e9 de Montr\u00e9al, which is used to gauge stand-alone tagger performance. Signal amplification and noise reduction techniques yield a 71% error reduction, achieving a core tagset accuracy of 96%, closely approaching the upper-bound 97% performance of an equivalent bigram model trained directly on an 80% subset of the hand-tagged evaluation set (using 5-fold cross-validation). Thus robust training on 500K words of very noisy but automatically-derived tag projections can approach the performance obtained by fully supervised learning on 80K words of hand-tagged training data.",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 19,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Evaluation of POS Tagger Induction",
"sec_num": "4.3"
},
{
"text": "Our empirical studies show that there is a very strong tendency for noun phrases to cohere as a unit when translated between languages, even when undergoing significant internal re-ordering. This strong noun-phrase cohesion even tends to hold for relatively free word order languages such as Czech, where both native speakers and parallel corpus data indicate that nominal modifiers tend to remain in the same contiguous chunk as the nouns they modify. This property allows collective word alignments to serve as a reliable basis for bracket projection as well.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NOUN PHRASE BRACKETER INDUCTION",
"sec_num": "5."
},
{
"text": "The projection process begins by automatically tagging and bracketing the English data, using Brill (1995) and Ramshaw & Marcus (1994) , respectively.",
"cite_spans": [
{
"start": 94,
"end": 106,
"text": "Brill (1995)",
"ref_id": "BIBREF1"
},
{
"start": 111,
"end": 134,
"text": "Ramshaw & Marcus (1994)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "BaseNP Projection Methodology",
"sec_num": "5.1"
},
{
"text": "As illustrated in Figure 5 , each word within an English noun phrase is then subscripted with the number of its NP in the sentence, and this subscript is projected onto the aligned French (or Chinese) words. In the most common case, the corresponding French/Chinese noun phrase is simply the maximal span of the projected subscript. Figure 6 shows some of the projection challenges encountered. Nearly all such cases of interwoven projected NPs are due to alignment errors, and a strong inductive bias towards NP cohesion was utilized to resolve these incompatible projections. Figure 6 : Problematic NP projection scenarios.",
"cite_spans": [],
"ref_spans": [
{
"start": 18,
"end": 26,
"text": "Figure 5",
"ref_id": null
},
{
"start": 333,
"end": 341,
"text": "Figure 6",
"ref_id": null
},
{
"start": 578,
"end": 586,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "BaseNP Projection Methodology",
"sec_num": "5.1"
},
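The subscript projection just described can be sketched as follows: each English NP number is carried across the word alignment, the projected NP is the maximal span of target positions bearing that number, and interleaved spans (almost always alignment errors) are flagged for filtering or repair. Data structures and names here are illustrative assumptions:

```python
def project_np_spans(np_ids, alignment):
    """np_ids: English word index -> NP number (words outside any NP
    are absent). alignment: (english_index, foreign_index) pairs.
    Returns {np_number: (lo, hi)}, each NP's maximal projected span."""
    spans = {}
    for en_i, fr_i in alignment:
        if en_i not in np_ids:
            continue
        np = np_ids[en_i]
        lo, hi = spans.get(np, (fr_i, fr_i))
        spans[np] = (min(lo, fr_i), max(hi, fr_i))
    return spans

def spans_conflict(spans):
    """True if any two projected spans interleave or overlap."""
    ordered = sorted(spans.values())
    return any(a_hi >= b_lo
               for (a_lo, a_hi), (b_lo, b_hi) in zip(ordered, ordered[1:]))
```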
{
"text": "Figure 5: Standard NP projection scenarios.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BaseNP Projection Methodology",
"sec_num": "5.1"
},
{
"text": "For stand-alone tool development, the Ramshaw & Marcus IOB bracketing framework and a fast transformation-based learning system (Ngai and Florian, 2001 ) were applied to the noisy baseNPprojected data described above.",
"cite_spans": [
{
"start": 128,
"end": 151,
"text": "(Ngai and Florian, 2001",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "BaseNP Training Algorithm",
"sec_num": "5.2"
},
{
"text": "As with POS tagger induction, bracketer induction is improved by focusing training on the highest quality projected data and excluding regions with the strongest indications of word-alignment error. Thus sentences with the lowest 25% of model-3 alignment scores were excluded from training, as were sentences where projected bracketings overlapped and conflicted (also an indicator of alignment errors). Data with lower-confidence POS tagging were not filtered, however, as this filtering reduces robustness when the stand-alone bracketers are applied to noisy tagger output. Additional details are provided in Yarowsky and Ngai (2001) .",
"cite_spans": [
{
"start": 611,
"end": 635,
"text": "Yarowsky and Ngai (2001)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "BaseNP Training Algorithm",
"sec_num": "5.2"
},
{
"text": "Current efforts to further improve the quality of the training data include use of iterative EM bootstrapping techniques. Separate projection of bracketings from aligned parallel data with a 3rd language also shows promise for providing independent supervision, which can further help distinguish consensus signal from noise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BaseNP Training Algorithm",
"sec_num": "5.2"
},
{
"text": "Because no bracketed evaluation data were available to us for French or Chinese, a third party fluent in these languages hand-bracketed a small, held-out 40-sentence evaluation set in both languages, using a set of bracketing conventions that they felt were appropriate for the languages. Table 2 shows the performance relative to these evaluation sets, as measured by exact-match bracketing precision (Pr), recall (R) and F-measure (F). It is important to note, however, that many decisions regarding BaseNP bracketing conventions are essentially arbitrary, and agreement rates between additional human judges on these data were measured at 64% and 80% for French and Chinese respectively. Since the translingual projections are essentially unsupervised and have no data on which to mimic arbitrary conventions, it is also reasonable to evaluate the degree to which the induced bracketings are deemed acceptable and consistent with the arbitrary gold standard (e.g. no crossing brackets). To this end, an additional pool of 3 judges was asked to further adjudicate the differences between the gold standard and the projection output, annotating such situations as either acceptable/compatible or unacceptable/incompatible.",
"cite_spans": [],
"ref_spans": [
{
"start": 288,
"end": 295,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "BaseNP Projection Evaluation",
"sec_num": "5.3"
},
{
"text": "Overall, these translingual projection results are quite encouraging. For Chinese, they are similar to Wu's 78% precision result for translingual-grammar-based NP bracketing, and especially promising given that no word segmentation (only raw characters) was used. For French, the increase from 59% to 91% F-measure for the stand-alone induced bracketer shows that the training algorithm is able to generalize successfully from the noisy raw projection data, distilling a reasonably accurate (and transferable) model of baseNP structure from this high degree of noise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BaseNP Projection Evaluation",
"sec_num": "5.3"
},
{
"text": "Multilingual named entity tagger induction is based on the extended combination of the part-of-speech and noun-phrase bracketing frameworks. The entity class tags used for this study were FNAME, LNAME, PLACE and OTHER (other entities including organizations). They were derived from an anonymously donated MUC-6 named entity tagger applied to the English side of the French-English Canadian Hansards data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NAMED ENTITY TAGGER INDUCTION",
"sec_num": "6."
},
{
"text": "Initial classification proceeds on a per-word basis, using an aggressively smoothed transitive projection model similar to those described in Section 7. For a given second-language word FW and all English words EW_i aligned to it:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NAMED ENTITY TAGGER INDUCTION",
"sec_num": "6."
},
{
"text": "P(NEclass | FW) = \u03a3_i P(NEclass | EW_i) P(EW_i | FW); e.g. P(PLACE | Cor\u00e9e) = P(PLACE | Korea) P(Korea | Cor\u00e9e) + ...",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NAMED ENTITY TAGGER INDUCTION",
"sec_num": "6."
},
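The transitive projection model above, P(NEclass | FW) = sum over aligned English words of P(NEclass | EW_i) P(EW_i | FW), can be sketched as a simple dictionary computation; the input structures are hypothetical stand-ins for the word-alignment and English-tagger outputs:

```python
def project_ne_class(fw, align_probs, en_class_probs):
    """Sum P(NEclass | EW_i) * P(EW_i | FW) over all English words
    EW_i aligned to the foreign word FW, returning a class
    distribution for FW."""
    dist = {}
    for ew, p_ew in align_probs.get(fw, {}).items():
        for cls, p_cls in en_class_probs.get(ew, {}).items():
            dist[cls] = dist.get(cls, 0.0) + p_cls * p_ew
    return dist
```

High-confidence words from this distribution would then seed the co-training-based stand-alone tagger.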
{
"text": "The co-training-based algorithm given in Cucerzan and Yarowsky (1999) was then used to train a stand-alone named entity tagger from the projected data. Seed words for this algorithm were those French words that were both POS-tagged as proper nouns and had an above-threshold entity-class confidence from the lexical projection models.",
"cite_spans": [
{
"start": 41,
"end": 69,
"text": "Cucerzan and Yarowsky (1999)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NAMED ENTITY TAGGER INDUCTION",
"sec_num": "6."
},
{
"text": "Performance was measured in terms of per-word entity-type classification accuracy on the French Hansard test data, using the 4-class inventory listed above. Classification accuracy of raw tag projections was only 64% (based on automatic word alignment). In contrast, the stand-alone co-training-based tagger trained on the projections achieved 85% classification accuracy, illustrating its effectiveness at generalization in the face of projection noise. Notably, most of its observed errors can be traced to entity classification errors from the original English tagger. In fact, when evaluated on the English translation of the French test data set, the English tagger only achieved 86% classification accuracy on this directly comparable data set. It appears that the projection-induced French tagger achieves performance nearly as high as its original training source. Thus further improvements should be expected from higher quality English training sources.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NAMED ENTITY TAGGER INDUCTION",
"sec_num": "6."
},
{
"text": "Bilingual corpora can also serve as a very successful bridge for aligning complex inflected word forms in a new language with their root forms, even when their surface forms are quite different or highly irregular. As illustrated in Figure 7 , the association between a French verbal inflection (croyant) and its correct root (croire), rather than a similar competitor (cro\u00eetre), can be identified by a single-step transitive association via an English bridge word (believing). However, in the case of morphology induction, such direct associations are relatively rare given that inflections in a second language tend to associate with similar tenses in English while the singular/infinitive forms tend to associate with analogous singular/infinitive forms, and thus croyaient (believed) and its root croire have no direct English link in our aligned corpus.",
"cite_spans": [],
"ref_spans": [
{
"start": 219,
"end": 227,
"text": "Figure 7",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "MORPHOLOGICAL ANALYSIS INDUCTION",
"sec_num": "7."
},
{
"text": "However, Figure 2 (first page) illustrates that an existing investment in a lemmatizer for English can help bridge this gap by joining a multi-step transitive association croyaient \u2192 believed \u2192 believe \u2192 croire. This association is computed by summing over all English bridge words (illustrated in Figure 8 ) with which either a candidate foreign inflection (i_infl) or its root (i_root) exhibits an alignment in the parallel corpus:",
"cite_spans": [],
"ref_spans": [
{
"start": 9,
"end": 17,
"text": "Figure 2",
"ref_id": null
},
{
"start": 212,
"end": 220,
"text": "Figure 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "MORPHOLOGICAL ANALYSIS INDUCTION",
"sec_num": "7."
},
{
"text": "P_mp(i_root | i_infl) = \u03a3_ew P(i_root | ew) P(ew | i_infl)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MORPHOLOGICAL ANALYSIS INDUCTION",
"sec_num": "7."
},
{
"text": "For example:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MORPHOLOGICAL ANALYSIS INDUCTION",
"sec_num": "7."
},
{
"text": "P_mp(croire | croyaient) = P(croire | BELIEVE) P(BELIEVE | croyaient) + P(croire | THINK) P(THINK | croyaient) + ...",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MORPHOLOGICAL ANALYSIS INDUCTION",
"sec_num": "7."
},
{
"text": "This projection/bridge-based similarity measure P_mp(i_root, i_infl) can be quite effective on its own, as shown in the MProj-only entries in Table 3 (for multiple parallel corpora in 3 different languages), especially when restricted to the highest-confidence subset of the vocabulary (5.2% to 77.9% in these data) for which the association exceeds simple fixed probability and frequency thresholds. When estimated using a 1.2 million word subset of the French Hansards, for example, the MProj measure alone achieves 98.5% precision on 32.7% of the inflected French verbs in the corpus (constituting 97.6% of the tokens in the corpus). Unlike traditional string-transduction-based morphology induction methods where irregular verbs pose the greatest challenges, these typically high-frequency words are often the best-modelled data in the vocabulary, making these multilingual projection techniques a natural complement to existing models.",
"cite_spans": [],
"ref_spans": [
{
"start": 140,
"end": 147,
"text": "Table 3",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "MORPHOLOGICAL ANALYSIS INDUCTION",
"sec_num": "7."
},
{
"text": "The high precision on the MProj-covered subset also makes these partial pairings effective training data for robust supervised algorithms that can generalize the string transformation behavior to the remaining uncovered vocabulary. While any supervised morphological analysis technique is possible here, we employ a trie-based modeling technique where the probability of a given stem-change (from the inventory observed in the MProj-paired training data) is modeled hierarchically using variable suffix context, as described in Yarowsky and Wicentowski (2000) :",
"cite_spans": [
{
"start": 527,
"end": 558,
"text": "Yarowsky and Wicentowski (2000)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Trie-based Morphology Models",
"sec_num": "7.1"
},
{
"text": "P(root | inflection) = P(inflection-suffix \u2192 root-suffix | suffix history), smoothed hierarchically over variable-length suffix histories ...",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Trie-based Morphology Models",
"sec_num": "7.1"
},
{
"text": "An important property of the trie-based models is their effectiveness at clustering words that exhibit similar morphological behavior, both reducing model size and facilitating generalization to previously unseen examples. This property is illustrated in Figure 9 , showing a sample (inflection \u2192 root) trie branch for French verbal inflections, with suffix histories 'oie', 'noie', 'roie', etc. At each history node, the hierarchically smoothed probabilities of several (inflection \u2192 root) changes are given. Note that the relative probabilities of the competing analyses ie \u2192 ir and ie \u2192 yer differ substantially for different suffix histories, and that there are subexceptions that tend to cluster by affix history. This allows for the successful analysis of 8 of the 9 italicized test words that had not been seen in the bilingual projection data or where the MProj model yielded no root candidate above threshold. Table 3 illustrates the performance of a variety of morphology induction models. When using the projection-based MProj and trie-based MTrie models together (with the latter extending coverage to words that may not even appear in the parallel corpus), full verb lemmatization precision on the 1.2M word Hansard subset exceeds 99.5% (by type) and 99.9% (by token), with 95.8% coverage by type and 99.8% coverage by token. A backoff model based on Levenshtein distance and distributional context similarity handles the relatively small percentage of cases where MProj and MTrie together are not sufficiently confident, bringing system coverage to 100% with a small drop in precision to 97.9% (by type) and 99.8% (by token) on the unrestricted space of inflected verbs observed in the full French Hansards. As shown in Section 7.3, performance is strongly correlated with the size of the initial aligned bilingual corpus, with a larger Hansard subset of 12M words yielding 99.4% precision (by type) and 99.9% precision (by token). Performance on Czech is discussed in Section 7.3.",
"cite_spans": [],
"ref_spans": [
{
"start": 255,
"end": 263,
"text": "Figure 9",
"ref_id": "FIGREF6"
},
{
"start": 926,
"end": 933,
"text": "Table 3",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Trie-based Morphology Models",
"sec_num": "7.1"
},
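The trie-based modeling in this section can be sketched with a flat suffix-history table and longest-match backoff; the flat dict stands in for the trie and the backoff for hierarchical smoothing, both simplifying assumptions relative to Yarowsky and Wicentowski (2000):

```python
def train_suffix_models(pairs, max_hist=4):
    """From (inflection, root) pairs (e.g. high-confidence MProj
    output), count inflection->root suffix changes conditioned on
    suffix histories of the inflection, lengths 1..max_hist."""
    models = {}
    for infl, root in pairs:
        i = 0  # strip the longest common prefix to isolate the change
        while i < min(len(infl), len(root)) and infl[i] == root[i]:
            i += 1
        change = (infl[i:], root[i:])
        for k in range(1, max_hist + 1):
            bucket = models.setdefault(infl[-k:], {})
            bucket[change] = bucket.get(change, 0) + 1
    return models

def lemmatize(infl, models, max_hist=4):
    """Back off from the longest matching suffix history and apply
    that history's most frequent suffix change."""
    for k in range(max_hist, 0, -1):
        changes = models.get(infl[-k:])
        if changes:
            old, new = max(changes, key=changes.get)
            return infl[: len(infl) - len(old)] + new
    return infl
```

Trained on pairs like (croyaient, croire), such a model generalizes the 'yaient'-to-'ire' and 'ait'-to-'er' changes to unseen inflections sharing the same suffix histories.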
{
"text": "[Table 3 column headings: Precision (Type, Token) and Coverage (Type, Token), per model]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Precision Coverage Model",
"sec_num": null
},
{
"text": "Even though at most one translation of the Bible is typically available in a given foreign language, numerous English Bible versions are freely available and a performance increase can be achieved by simultaneously utilizing alignments to each English version. As illustrated in Figure 10 , different aligned Bible pairs may exhibit (or be missing) different full or partial bridge links for a given word (due both to different lexical usage and poor textual parallelism in some text-regions or version pairs). However, ",
"cite_spans": [],
"ref_spans": [
{
"start": 279,
"end": 288,
"text": "Figure 10",
"ref_id": "FIGREF7"
}
],
"eq_spans": [],
"section": "Boosting Performance via Multiple Parallel Translations",
"sec_num": "7.2.1"
},
{
"text": "Once lemmatization capabilities have been successfully projected to a new language (such as French), this language can then serve as an additional bridging source for morphology induction in a third language (such as Spanish), as illustrated in Figure 11 . This can be particularly effective if the two languages are very similar (as in Spanish-French) or if their available Bible versions are a close translation of a common source (e.g. the Latin Vulgate Bible). As shown in ",
"cite_spans": [],
"ref_spans": [
{
"start": 245,
"end": 254,
"text": "Figure 11",
"ref_id": null
}
],
"eq_spans": [],
"section": "Boosting Performance via Multiple Bridge Languages",
"sec_num": "7.2.2"
},
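The transitive multi-bridge linkage (summing over all bridge lemmas, as in the formulation accompanying Figure 8) can be sketched roughly as follows. The function name and the toy probabilities are assumptions; the model is score(root) = Σ_e P(e | inflection) · P(root | e), summed over every bridge lemma e linking the inflection and a candidate root.

```python
def transitive_root_scores(infl_to_bridge, bridge_to_root):
    """Score candidate target-language roots for one inflection by summing
    over every bridge lemma that links them:
        score(root) = sum_e P(e | inflection) * P(root | e)."""
    scores = {}
    for lemma, p_lemma in infl_to_bridge.items():
        for root, p_root in bridge_to_root.get(lemma, {}).items():
            scores[root] = scores.get(root, 0.0) + p_lemma * p_root
    return scores
```

Multiple bridges make the mapping robust: even if one bridge lemma is misaligned, other lemmas linking the same inflection/root pair still contribute to the correct candidate's score.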
{
"text": "This section includes additional detail regarding the morphology induction experiments, supplementing the previous details and analyses given in Section 7 and Table 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 159,
"end": 166,
"text": "Table 3",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Morphology Induction: Observations",
"sec_num": "7.3"
},
{
"text": "Induction performance using the French Bible as the bridge source is evaluated using the full test verb set extracted from the French Hansards. The strong performance when trained only on the Bible illustrates that even a small single text in a very different genre can provide effective transfer to modern (conversational) French. While the observed genre- and topic-sensitive vocabulary differs substantially between the Bible and the Hansards, the observed inventories of stem changes and suffixation actually have large overlap, as do the sets of observed high-frequency irregular verbs. Thus the inventory of morphological phenomena seems to transfer better across genre than do lexical choice and collocation models. Over 60% of errors are due to gaps in the candidate rootlists. Currently the candidate rootlists are derived automatically by applying the projected POS models and selecting any word whose probability of being an uninflected verb exceeds a generous threshold and which also ends in a canonical verb suffix. False positives are easily tolerated (less than 5% of errors are due to spurious non-root competitors), but with missing roots the algorithms are forced either to propose previously unseen roots or to align to the closest previously observed root candidate. Thus, while no non-English dictionary was used in the computation of these results, a dictionary-based inventory of potential roots would substantially improve performance, increasing coverage and decreasing noise from competing non-roots and spelling errors. Performance in all languages has been significantly hindered by low-accuracy parallel-corpus word alignments using the original Model-3 GIZA tools. 
Use of Och and Ney's recently released and enhanced GIZA++ word-alignment models (Och and Ney, 2000) should improve performance for all of the applications studied in this paper, as would iterative realignment using richer alignment features (including lemma and part of speech) derived from this research. The current somewhat lower performance on Czech is due to several factors: (a) very low-accuracy initial word alignments, due to the often non-parallel translations of the Reader's Digest sample and the failure of the initial word-alignment models to handle the highly inflected Czech morphology; (b) the small size of the Czech parallel corpus (less than twice the length of the Bible); and (c) the common occurrence in Czech of two very similar perfective and non-perfective root variants (e.g. odol\u00e1vat and odolat, both of which mean to resist). A simple monolingual dictionary-derived list of canonical roots would resolve the ambiguity regarding which is the appropriate target. Many of the errors are due to all (or most) inflections of a single verb mapping to the same incorrect root. But for many applications where the function of lemmatization is to cluster equivalent words (e.g. stemming for information retrieval), the choice of label for the lemma is less important than correctly linking the members of the lemma.",
"cite_spans": [
{
"start": 1785,
"end": 1804,
"text": "(Och and Ney, 2000)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Morphology Induction: Observations",
"sec_num": "7.3"
},
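The candidate-rootlist criterion and the Levenshtein-distance backoff described in this section can be sketched as below. The threshold value, the French infinitive-suffix list, and the function names are assumptions for illustration; the actual backoff also uses distributional context similarity, which is omitted here.

```python
def candidate_roots(verb_prob, suffixes=("er", "ir", "re"), threshold=0.2):
    """Candidate rootlist: keep any word whose probability of being an
    uninflected verb (from the projected POS model) exceeds a generous
    threshold and which ends in a canonical infinitive suffix."""
    return {w for w, p in verb_prob.items()
            if p >= threshold and w.endswith(suffixes)}


def levenshtein(a, b):
    """Standard dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]


def backoff_root(inflection, roots):
    """Backoff: align the inflection to the closest observed root candidate
    by edit distance (string-similarity component only)."""
    return min(roots, key=lambda r: levenshtein(inflection, r))
```

This also illustrates why missing roots hurt: `backoff_root` must return *some* candidate, so a gap in the rootlist forces alignment to the nearest wrong competitor.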
{
"text": "Figure 12 shows the strong correlation between performance and the size of the aligned corpus. Given that large quantities of parallel text currently exist in translation-bureau archives and OCR-able books, not to mention the increasing online availability of bitext on the web, the natural growth of available bitext quantities should continue to support performance improvement.",
"cite_spans": [],
"ref_spans": [
{
"start": 0,
"end": 9,
"text": "Figure 12",
"ref_id": "FIGREF8"
}
],
"eq_spans": [],
"section": "005",
"sec_num": null
},
{
"text": "The system analysis examples shown in Table 4 are representative of model performance and were selected to illustrate the range of encountered phenomena. All system evaluation is based on the task of selecting the correct root for a given inflection (a task with a long-standing lexicographic consensus regarding the \"truth\"). In contrast, the descriptive analysis of any such pairing is highly theory-dependent, with no standard consensus. The \"TopBridge\" column shows the strongest English bridge lemma utilized in the mapping (typically one of many potential bridge lemmas).",
"cite_spans": [],
"ref_spans": [
{
"start": 38,
"end": 45,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "005",
"sec_num": null
},
{
"text": "These results are quite impressive in that they are based on essentially no language-specific knowledge of French, Spanish or Czech. In addition, the multilingual bridge algorithm is surface-form independent, and can just as readily handle obscure infixational or reduplicative morphological processes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "005",
"sec_num": null
},
{
"text": "This paper has presented a detailed survey of original algorithms for cross-language annotation projection and noise-robust tagger induction, evaluated on four diverse applications. It shows how previous major investments in English annotated corpora and tool development can be effectively leveraged across languages, achieving accurate stand-alone tool development in other languages without comparable human annotation effort. Collectively, this work is the most comprehensive existing exploration of a very promising new paradigm for cross-language resource projection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CONCLUSION",
"sec_num": "8."
},
{
"text": "Performance using even small parallel corpora (e.g. a 120K-word subset of the French Hansards) still yields a respectable 93.2% (type) and 98.9% (token) precision on the verb-lemmatization test set for the full Hansards. Given that the Bible is actually larger (approximately 300K words, depending on version and language) and available online or via OCR for virtually all languages (Resnik et al., 2000), we also conducted several experiments on Bible-based morphology induction, further detailed in Table 3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This research has been partially supported by NSF grant IIS-9985033 and ONR/MURI contract N00014-01-1-0685. The authors thank Silviu Cucerzan, Radu Florian, Jan Hajic, Gideon Mann and Charles Schafer for their valuable contributions and feedback.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
},
{
"text": "French",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "FRENCH Verbal Morphology Induction",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Statistical Machine Translation",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Al-Onaizan",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Curin",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Jahr",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Lafferty",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Melamed",
"suffix": ""
},
{
"first": "F",
"middle": [
"J"
],
"last": "Och",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Purdy",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Yarowsky",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. Al-Onaizan, J. Curin, M. Jahr, K. Knight, J. Lafferty, D. Melamed, FJ Och, D. Purdy, N. Smith and D. Yarowsky. 1999. Statistical Machine Translation (tech report). Johns Hopkins University.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Transformation-based error-driven learning and natural language processing: A case study in part of speech tagging",
"authors": [
{
"first": "E",
"middle": [],
"last": "Brill",
"suffix": ""
}
],
"year": 1995,
"venue": "Computational Linguistics",
"volume": "21",
"issue": "4",
"pages": "543--565",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Brill. 1995. Transformation-based error-driven learning and natural language processing: A case study in part of speech tagging. Computational Linguistics, 21(4): 543-565.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A statistical approach to machine translation",
"authors": [
{
"first": "P",
"middle": [],
"last": "Brown",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Cocke",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Dellapietra",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Dellapietra",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Jelinek",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Lafferty",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Mercer",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Rossin",
"suffix": ""
}
],
"year": 1990,
"venue": "Computational Linguistics",
"volume": "16",
"issue": "2",
"pages": "29--85",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Brown, J. Cocke, S. DellaPietra, V. DellaPietra, F. Jelinek, J. Lafferty, R. Mercer, and P. Rossin. 1990. A statistical approach to machine translation. Computational Linguistics, 16(2):29-85.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Language independent named entity recognition combining morphological and contextual evidence",
"authors": [
{
"first": "S",
"middle": [],
"last": "Cucerzan",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Yarowsky",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings, 1999 Joint SIGDAT Conference on Empirical Methods in NLP and Very Large Corpora",
"volume": "",
"issue": "",
"pages": "90--99",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Cucerzan and D. Yarowsky. 1999. Language independent named entity recognition combining morphological and contextual evidence. In Proceedings, 1999 Joint SIGDAT Conference on Empirical Methods in NLP and Very Large Corpora, pp. 90-99.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "K-vec: a new approach for aligning parallel texts",
"authors": [
{
"first": "P",
"middle": [],
"last": "Fung",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Church",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of COLING-94",
"volume": "",
"issue": "",
"pages": "1096--1102",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Fung and K. Church. 1994. K-vec: a new approach for aligning parallel texts. In Proceedings of COLING-94, pp. 1096-1102.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Aligning noisy parallel corpora across language groups: Word pair feature matching by dynamic warping",
"authors": [
{
"first": "P",
"middle": [],
"last": "Fung",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Mckeown",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of AMTA-94",
"volume": "",
"issue": "",
"pages": "81--88",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Fung and K. McKeown. 1994. Aligning noisy parallel corpora across language groups: Word pair feature matching by dynamic warping. In Proceedings of AMTA-94, pp. 81-88.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Twisted pair grammar: Support for rapid development of machine translation for low density languages",
"authors": [
{
"first": "D",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Havrilla",
"suffix": ""
}
],
"year": 1998,
"venue": "Procs. of AMTA'98",
"volume": "",
"issue": "",
"pages": "318--332",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Jones and R. Havrilla. 1998. Twisted pair grammar: Support for rapid development of machine translation for low density languages. In Procs. of AMTA'98, pp. 318-332.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Bitext maps and alignment via pattern recognition",
"authors": [
{
"first": "D",
"middle": [],
"last": "Melamed",
"suffix": ""
}
],
"year": 1999,
"venue": "Computational Linguistics",
"volume": "25",
"issue": "1",
"pages": "107--130",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Melamed. 1999. Bitext maps and alignment via pattern recognition. Computational Linguistics, 25(1):107-130.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Transformation-based learning in the fast lane",
"authors": [
{
"first": "G",
"middle": [],
"last": "Ngai",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Florian",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of NAACL-2001",
"volume": "",
"issue": "",
"pages": "40--47",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. Ngai and R. Florian. 2001. Transformation-based learning in the fast lane. In Proceedings of NAACL-2001, pp. 40-47.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Improved statistical alignment models",
"authors": [
{
"first": "F",
"middle": [
"J"
],
"last": "Och",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of ACL-2000",
"volume": "",
"issue": "",
"pages": "440--447",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F.J. Och and H. Ney. 2000. Improved statistical alignment models. In Proceedings of ACL-2000, pp. 440-447.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Text chunking using transformation-based learning",
"authors": [
{
"first": "L",
"middle": [],
"last": "Ramshaw",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Marcus",
"suffix": ""
}
],
"year": 1999,
"venue": "Natural Language Processing Using Very Large Corpora. Kluwer",
"volume": "",
"issue": "",
"pages": "157--176",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. Ramshaw and M. Marcus, 1999. Text chunking using transformation-based learning. In Armstrong et al. (Eds.), Natural Language Processing Using Very Large Corpora. Kluwer, pp. 157-176.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "The Bible as a parallel corpus: annotating the 'Book of 2000 Tongues",
"authors": [
{
"first": "P",
"middle": [],
"last": "Resnik",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Olsen",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Diab",
"suffix": ""
}
],
"year": 2000,
"venue": "Computers and the Humanities",
"volume": "33",
"issue": "1-2",
"pages": "129--153",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Resnik, M. Olsen, and M. Diab. 2000. The Bible as a parallel corpus: annotating the 'Book of 2000 Tongues'. Computers and the Humanities, 33(1-2):129-153.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "An algorithm for simultaneously bracketing parallel texts",
"authors": [
{
"first": "D",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 1995,
"venue": "Proc. of ACL-95",
"volume": "",
"issue": "",
"pages": "244--251",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Wu. 1995. An algorithm for simultaneously bracketing parallel texts. In Proc. of ACL-95, pp. 244-251.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Statistical inversion transduction grammars and bilingual parsing of parallel corpora",
"authors": [
{
"first": "D",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 1997,
"venue": "Computational Linguistics",
"volume": "23",
"issue": "3",
"pages": "377--404",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Wu. 1997. Statistical inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23(3):377-404.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Inducing multilingual POS taggers and NP Bracketers via robust projection across aligned corpora",
"authors": [
{
"first": "D",
"middle": [],
"last": "Yarowsky",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Ngai",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of NAACL-2001",
"volume": "",
"issue": "",
"pages": "377--404",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Yarowsky and G. Ngai. 2001. Inducing multilingual POS taggers and NP Bracketers via robust projection across aligned corpora. In Proceedings of NAACL-2001, pp. 377-404.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Minimally supervised morphological analysis by multimodal alignment",
"authors": [
{
"first": "D",
"middle": [],
"last": "Yarowsky",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Wicentowski",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of ACL-2000",
"volume": "",
"issue": "",
"pages": "207--216",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Yarowsky and R. Wicentowski. 2000. Minimally supervised morphological analysis by multimodal alignment. In Proceedings of ACL-2000, pp. 207-216.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Projecting part-of-speech tags, named-entity tags and noun-phrase structure from English to Chinese and French. French morphological analysis via English"
},
"FIGREF2": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "French POS tag projection scenarios"
},
"FIGREF3": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Figure 4 illustrates the process of hierarchically smoothing the lexical prior model P(t|w). One motivating empirical observation is that words in French, English and Czech have a strong tendency to exhibit only a single core POS tag, and very rarely have more than 2. In English, with relatively high P(POS|word) ambiguity, only 0.37% of the tokens in the Brown Corpus are not covered by a word type's two most frequent core tags, and in French the percentage of tokens is only 0.03%. Thus we employ an aggressive"
},
"FIGREF4": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Hierarchical"
},
"FIGREF5": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Direct-bridge French inflection/root alignment. As illustrated in"
},
"FIGREF6": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Example of a French MTrie branch; analyses on test data are given in italics."
},
"FIGREF7": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "need not be estimated from the same Bible pair. Even if one has only one Bible in a given source language, each alignment with a distinct English version gives new bridging opportunities with no additional resources needed on the source language side. The baseline approach (evaluated here) is simply to concatenate the different aligned versions together. While word-pair instances translated the same way in each version will be repeated, this rather reasonably reflects the increased confidence in this particular alignment. An alternate model would weight version pairs differently based on the otherwise-measured translation faithfulness and alignment quality between the version pairs. Doing so would help decrease noise. Increasing from 1 to 3 English versions reduces the type error rate (at full coverage) by 22% on French and 28% on Spanish with no increase in the source language resources. Use of multiple parallel Bible translations"
},
"FIGREF8": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Learning Curves for French Morphology. The learning curves in"
},
"TABREF0": {
"num": null,
"html": null,
"content": "<table><tr><td colspan=\"16\">National laws applying in Hong Kong [ [ ] ] JJ VBG IN NNP NNP NNS</td></tr><tr><td/><td/><td colspan=\"2\">0</td><td/><td/><td>1</td><td/><td>2</td><td/><td/><td>3</td><td/><td>4</td><td/><td>5</td></tr><tr><td/><td>0</td><td>[</td><td>1</td><td>2</td><td>]</td><td>3</td><td>4</td><td>5</td><td>[</td><td>6</td><td>7</td><td/><td>8</td><td>9</td><td>10</td><td>]</td></tr><tr><td/><td>IN</td><td colspan=\"6\">NNP NNP VBG VBG</td><td/><td colspan=\"2\">JJ</td><td>JJ</td><td/><td>JJ</td><td colspan=\"2\">NNS NNS</td></tr><tr><td/><td>In</td><td/><td>Hong</td><td/><td colspan=\"4\">implementing of</td><td/><td/><td colspan=\"2\">national</td><td/><td/><td>law(s)</td></tr><tr><td/><td/><td/><td>Kong</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>[</td><td colspan=\"9\">DT a significant producer 20 NN JJ 19 18</td><td>]</td><td>for IN 21</td><td>[</td><td colspan=\"3\">crude oil NN JJ 22 23</td><td>]</td></tr><tr><td colspan=\"16\">un producteur important de petrole brut 12 13 14 15 11 10 [ [ ]</td><td>]</td></tr><tr><td/><td/><td/><td colspan=\"2\">NN</td><td/><td/><td/><td>JJ</td><td/><td/><td/><td/><td/><td>NN</td><td>JJ</td></tr></table>",
"type_str": "table",
"text": ""
},
"TABREF2": {
"num": null,
"html": null,
"content": "<table><tr><td>4 illustrates the process of hierarchically smoothing the</td></tr><tr><td>lexical prior model is that words in French, English and Czech have a strong tendency 0 \" # % 1 2 3 . One motivating empirical observation &amp;</td></tr><tr><td>to exhibit only a single core POS tag (e.g. have more than 2. In English, with relatively high 4 or ), and very rarely 5 \" # POS 3 am-&amp; biguity, only 0.37% of the tokens in the Brown Corpus are not cov-</td></tr><tr><td>ered by a word type's two most frequent core tags, and in French</td></tr><tr><td>the percentage of tokens is only 0.03%. Thus we employ an ag-</td></tr></table>",
"type_str": "table",
"text": "re-estimation in favor of this bias, amplifying the model probability of the majority POS tag, and reducing or zeroing the model probability of 2nd or lower ranked core tags proportional to their relative frequency with respect to the majority tag. This process is then applied recursively, similarly amplifying the probability of the majority subtags within each core tag. Further details, including the handling of 1-to-N phrasal alignment projections, are given inYarowsky and Ngai (2001)."
},
"TABREF3": {
"num": null,
"html": null,
"content": "<table><tr><td/><td colspan=\"3\">Exact Match</td><td colspan=\"3\">Acceptable Match</td></tr><tr><td>Method</td><td>Pr</td><td>R</td><td>F</td><td>Pr</td><td>R</td><td>F</td></tr><tr><td>Chinese:</td><td/><td/><td/><td/><td/><td/></tr><tr><td>Direct (auto)</td><td colspan=\"5\">.26 .58 .36 .48 .58</td><td>.51</td></tr><tr><td colspan=\"6\">Direct (hand) .47 .61 .53 .86 .86</td><td>.86</td></tr><tr><td>French:</td><td/><td/><td/><td/><td/><td/></tr><tr><td>Direct (auto)</td><td colspan=\"5\">.43 .48 .45 .60 .58</td><td>.59</td></tr><tr><td colspan=\"6\">Direct (hand) .56 .51 .53 .74 .70</td><td>.72</td></tr><tr><td>FTBL (auto)</td><td colspan=\"5\">.82 .81 .81 .91 .91</td><td>.91</td></tr></table>",
"type_str": "table",
"text": ""
},
"TABREF4": {
"num": null,
"html": null,
"content": "<table><tr><td colspan=\"3\">believed</td><td>believing</td><td>believe</td><td/></tr><tr><td>x</td><td>y %</td><td>R</td><td/><td>x</td><td>y %</td><td>R</td></tr><tr><td colspan=\"2\">croyaient</td><td/><td colspan=\"2\">croissant</td><td/><td>croire</td><td>V</td></tr><tr><td/><td/><td/><td/><td colspan=\"2\">x y % X R Y R</td><td>French Roots</td></tr><tr><td colspan=\"7\">Figure 8: Multi-bridge French inflection/root alignment</td></tr><tr><td colspan=\"7\">lemmatization can be potentially utilized for all other English lem-</td></tr><tr><td colspan=\"7\">mas (such as THINK) with which croyaient and croire also asso-</td></tr><tr><td colspan=\"7\">ciate, offering greater potential coverage and robustness via multi-</td></tr><tr><td colspan=\"3\">ple bridges.</td><td/><td/><td/></tr><tr><td colspan=\"7\">Formally, these multiple transitive linkages can be modeled as</td></tr><tr><td colspan=\"7\">shown below, by summing over all English lemmas (a</td><td>d f e h g</td></tr><tr><td/><td/><td/><td/><td/><td/><td>illustrates how this transitive linkage via English</td></tr></table>",
"type_str": "table",
"text": ""
},
"TABREF6": {
"num": null,
"html": null,
"content": "<table><tr><td colspan=\"2\">English Bridge Lemmas</td><td/><td colspan=\"4\">French Bridge Lemmas</td></tr><tr><td colspan=\"2\">BELIEVE</td><td/><td/><td/><td/><td>CROIRE</td></tr><tr><td>believed</td><td>believing</td><td>believe</td><td/><td/><td colspan=\"2\">croyaient</td><td>croire</td></tr><tr><td/><td/><td/><td/><td>%</td><td>R R</td><td>R</td></tr><tr><td>creyeron</td><td>creia</td><td/><td/><td/><td colspan=\"2\">creer</td><td>crear</td></tr><tr><td>Spanish Inflections</td><td/><td>% X</td><td>R R</td><td>Y</td><td>R</td><td>Spanish Roots</td></tr><tr><td>Figure 11:</td><td/><td/><td/><td/><td/></tr></table>",
"type_str": "table",
"text": ", using the previously analyzed French Bible as a bridge for Spanish achieves performance (97.4% precision) comparable to the use of 3 parallel English Bible versions."
}
}
}
}