{
"paper_id": "Y06-1018",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:34:11.282473Z"
},
"title": "Multi-feature Based Chinese-English Named Entity Extraction from Comparable Corpora",
"authors": [
{
"first": "Min",
"middle": [],
"last": "Lu",
"suffix": "",
"affiliation": {
"laboratory": "National Laboratory of Pattern Recognition",
"institution": "Chinese Academy of Sciences",
"location": {
"postCode": "100080",
"settlement": "Beijing",
"country": "China"
}
},
"email": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": "",
"affiliation": {
"laboratory": "National Laboratory of Pattern Recognition",
"institution": "Chinese Academy of Sciences",
"location": {
"postCode": "100080",
"settlement": "Beijing",
"country": "China"
}
},
"email": "jzhao@nlpr.ia.ac.cn"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Bilingual Named Entity extraction is important to cross-language information processes such as machine translation (MT) and cross-lingual information retrieval (CLIR). A lot of previous work extracted bilingual Named Entities from parallel corpora. Here we propose a multi-feature based method to extract bilingual Named Entities from comparable corpora. We first recognize the Chinese and English Named Entities respectively from the Chinese and English parts of the comparable corpus. Then all the feature scores are calculated for every possible pair of Chinese and English Named Entities. Finally we combine these feature scores and decide which pairs are mutual translations. For translation score calculation, we did not use the formula of IBM Model 1 as previous approaches did. Instead, we used a modified edit distance to take the order of words into consideration. Experiments show that the F-score of this method increased by 11%, and with the multi-feature integration strategy encouraging results are obtained.",
"pdf_parse": {
"paper_id": "Y06-1018",
"_pdf_hash": "",
"abstract": [
{
"text": "Bilingual Named Entity extraction is important to cross-language information processes such as machine translation (MT) and cross-lingual information retrieval (CLIR). A lot of previous work extracted bilingual Named Entities from parallel corpora. Here we propose a multi-feature based method to extract bilingual Named Entities from comparable corpora. We first recognize the Chinese and English Named Entities respectively from the Chinese and English parts of the comparable corpus. Then all the feature scores are calculated for every possible pair of Chinese and English Named Entities. Finally we combine these feature scores and decide which pairs are mutual translations. For translation score calculation, we did not use the formula of IBM Model 1 as previous approaches did. Instead, we used a modified edit distance to take the order of words into consideration. Experiments show that the F-score of this method increased by 11%, and with the multi-feature integration strategy encouraging results are obtained.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Named Entities (NEs) such as person names, location names, and organization names carry essential information in human language [1] . Examples of some base Named Entities are as follows:",
"cite_spans": [
{
"start": 129,
"end": 132,
"text": "[1]",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Person name: \u80e1\u9526\u6d9b Hu Jintao \u8fc8\u514b\u5c14\u2022\u6b27\u6587 Michael Owen Location name: \u6e25\u592a\u534e Ottawa \u534e\u76db\u987f Washington Organization name: \u6559\u80b2\u90e8 Ministry of Education \u56fd\u5bb6\u5b89\u5168\u59d4\u5458\u4f1a National Security Council Bilingual Named Entity Extraction is very important to many cross-language information processes such as machine translation (MT), cross-lingual information retrieval (CLIR), etc. In machine translation systems we translate the NEs in the sentences according to a bilingual NE list. Thus the list should be constantly updated to ensure correct translation of new NEs, since new NEs appear on the Internet every day. This paper is about how to extract bilingual NEs from comparable corpora, which can be easily obtained from the Internet.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A lot of work has been done to get bilingual NEs from parallel corpora, aligning English NEs and Chinese NEs in parallel sentences [2] [3] . Although encouraging results have been obtained, the parallel corpus required by this approach is not easy to obtain: constructing one requires enormous human effort and a long time.",
"cite_spans": [
{
"start": 131,
"end": 134,
"text": "[2]",
"ref_id": "BIBREF1"
},
{
"start": 135,
"end": 138,
"text": "[3]",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Compared with a parallel corpus, a comparable corpus can be obtained much more easily. A comparable corpus is one that contains non-sentence-aligned, non-translated bilingual documents that are topic-aligned. For example, newspaper articles from two sources in different languages, within the same window of published dates, can constitute a comparable corpus [4] .",
"cite_spans": [
{
"start": 353,
"end": 356,
"text": "[4]",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, bilingual NE extraction from a comparable corpus is much more difficult than from a parallel corpus. In a parallel corpus, given a source NE to be translated in a source sentence, the translation candidates are only the NEs in the corresponding target sentence. But in a comparable corpus the possible translations for a Chinese NE are the NEs of the same type in the whole target-language corpus. The search space is much larger than in a parallel corpus, which greatly increases the difficulty of extraction. In our work, we propose a multi-feature integration strategy to address this problem. Multiple features, including phonetic, semantic, length and context features, capture similarity from various aspects and help find the correct NE pairs among a large number of candidates. Experimental results show that all these features are very useful for the task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of this paper is organized as follows: Section 2 discusses related work on bilingual NE extraction. Section 3 introduces our approach, including the features we selected. Experimental results and analysis are presented in Section 4. Finally, conclusions are given in Section 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Most previous methods of extracting bilingual NEs are based on parallel corpora. [2] [3] integrated multiple features, including transliteration and translation features, on parallel corpora to align Named Entities in parallel sentences. In this process, NEs are first recognized from the source language and the target language respectively. Then, NEs in each pair of parallel sentences are aligned according to their features. [5] extracted formulation and transformation rules for multilingual named entities from multilingual named entity corpora and applied them to CLIR. However, the corpora they used are not easy to obtain, while the comparable corpus we use is much more accessible.",
"cite_spans": [
{
"start": 81,
"end": 84,
"text": "[2]",
"ref_id": "BIBREF1"
},
{
"start": 441,
"end": 444,
"text": "[5]",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2."
},
{
"text": "[6] adopted context co-occurrence relations to extract a bilingual lexicon from non-parallel corpora, one of the early studies on non-parallel corpora. This approach is based on the assumption that if two words in different languages have similar contexts, they are most likely mutual translations. However, it was not very effective, with an accuracy of 30% when only the top candidate was considered. [7] integrated this method and transliteration to extract keywords from comparable corpora. That task is similar to ours; however, it only considered two features while ours combines more.",
"cite_spans": [
{
"start": 430,
"end": 433,
"text": "[7]",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2."
},
{
"text": "Person names and most location names in the source language are similar in pronunciation to those in the target language. The transliteration feature is used to characterize this property. Transliteration is the process of replacing words in the source language with their approximate phonetic or spelling equivalents in the target language [8] . Most previous approaches resorted to phoneme similarity, where a pronunciation lexicon is needed. Fei Huang [3] constructed a transliteration model on the surface level which did not need a pronunciation lexicon. This method uses pinyin, the Romanized representation of Chinese characters, as an intermediate. It has two levels of transliteration: Chinese character to pinyin syllable, and pinyin syllable to English letter string. Our transliteration probability from English letters to pinyin letters is trained following this idea. We view pinyin as the source language and English as the target language, and train the probability from English letters to pinyin letters using IBM Model 4 on an LDC bilingual person name list.",
"cite_spans": [
{
"start": 342,
"end": 345,
"text": "[8]",
"ref_id": "BIBREF7"
},
{
"start": 456,
"end": 459,
"text": "[3]",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Transliteration Score",
"sec_num": "3.1.1"
},
{
"text": "To measure the similarity between a pinyin string and an English string, we compute the edit distance between them. In the standard edit distance the cost function is 0 or 1. We instead use the transliteration probability as the cost function to obtain a phonetic distance. The cost function is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transliteration Score",
"sec_num": "3.1.1"
},
{
"text": "1 - (p(r_i | e_j) + p(e_j | r_i)) / 2 (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transliteration Score",
"sec_num": "3.1.1"
},
{
"text": "where r_i and e_j are the i-th pinyin letter and the j-th English letter respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transliteration Score",
"sec_num": "3.1.1"
},
{
"text": "We normalize the edit distance according to the string length and convert it to a similarity between the pinyin string and the English string. So the transliteration score is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transliteration Score",
"sec_num": "3.1.1"
},
{
"text": "Score_transli(ne_c, ne_e) = exp(-d / L_larger) (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transliteration Score",
"sec_num": "3.1.1"
},
{
"text": "where d is the edit distance and L_larger is the length of the longer string.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transliteration Score",
"sec_num": "3.1.1"
},
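The transliteration scoring above can be sketched as a weighted edit distance: formula (1)'s cost replaces the standard 0/1 substitution cost, and formula (2) maps the normalized distance to a similarity. A minimal illustration; the function names and the probability dictionaries are ours, and the probabilities would come from the IBM Model 4 training the paper describes:

```python
import math

def weighted_edit_distance(pinyin, english, p_e_given_r, p_r_given_e):
    """Edit distance where substituting pinyin letter r_i for English letter
    e_j costs 1 - (p(r_i|e_j) + p(e_j|r_i)) / 2, as in formula (1).
    Insertions and deletions keep the standard cost of 1."""
    m, n = len(pinyin), len(english)
    # d[i][j] = distance between pinyin[:i] and english[:j]
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = i
    for j in range(1, n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            r, e = pinyin[i - 1], english[j - 1]
            sub = 1.0 - (p_r_given_e.get((r, e), 0.0) + p_e_given_r.get((e, r), 0.0)) / 2
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + sub)
    return d[m][n]

def transliteration_score(pinyin, english, p_e_given_r, p_r_given_e):
    # Formula (2): normalize by the longer string length, map to (0, 1].
    dist = weighted_edit_distance(pinyin, english, p_e_given_r, p_r_given_e)
    return math.exp(-dist / max(len(pinyin), len(english)))
```

Identical strings whose letter pairs have probability 1 get a distance of 0 and hence a score of 1; any mismatch lowers the score.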
{
"text": "Some Named Entities exhibit not only phonetic similarity but also semantic similarity. For example, between the location names \"\u5c0f\u843d\u77f6\u5c71\u8109\" and \"Little Rocky Mountains\", \"\u843d\u77f6\" and \"Rocky\" are phonetically similar, while \"\u5c0f\" and \"\u5c71\u8109\" are the semantic translations of \"Little\" and \"Mountains\" respectively. For most Chinese organization names and their corresponding English ones, the words composing them are mutual translations, as shown in Table 1. Thus we need to consider the translation probability between words in the English NE and words in the Chinese NE to account for the semantic similarity. Table 1 . Examples of organization names \"\u6b27\u6d32\u59d4\u5458\u4f1a\" \"European Commission\" \"\u6d77\u5173\u603b\u7f72\" \"General Administration of Customs\" \"\u6559\u80b2\u90e8\" \"Education Ministry\" \"\u56fd\u5bb6\u5b89\u5168\u59d4\u5458\u4f1a\" \"National Security Council\"",
"cite_spans": [],
"ref_spans": [
{
"start": 592,
"end": 599,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Translation Score",
"sec_num": "3.1.2"
},
{
"text": "Previous work extracting bilingual NEs from parallel corpora defined the translation score using IBM Model 1 or a similar formula, without considering the order of words in NEs. In fact the word-order rules for NEs are relatively simple. Consider organization names: some Chinese organization names have the same word order as their English names, while others have the opposite order when there is a word \"of\" in the middle of the English organization name. So we can use a modified edit distance to evaluate the similarity between Chinese NEs and English NEs. When the English NE has a word \"of\" in the middle, we swap the words to the left of \"of\" with those to the right. Our cost function is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Score",
"sec_num": "3.1.2"
},
{
"text": "1 - p(e_j | c_l)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Score",
"sec_num": "3.1.2"
},
{
"text": ", where the translation probability",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Score",
"sec_num": "3.1.2"
},
{
"text": "p(e_j | c_l)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Score",
"sec_num": "3.1.2"
},
{
"text": "can be estimated from a large parallel corpus using IBM Model 1 [9] . Assume that the edit distance between the two NEs is d and the longer length is L_larger; then our translation score is given below:",
"cite_spans": [
{
"start": 63,
"end": 66,
"text": "[9]",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Score",
"sec_num": "3.1.2"
},
{
"text": "Score_trans(ne_c, ne_e) = exp(-d / L_larger) (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Score",
"sec_num": "3.1.2"
},
{
"text": "Experiments demonstrate that our translation score, which considers word order, performs better than previous methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Score",
"sec_num": "3.1.2"
},
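The modified edit distance for the translation score can be sketched as follows: words around a middle \"of\" are swapped, the word-level substitution cost is 1 - p(e_j | c_l), and the distance is normalized as in formula (3). The function names and the toy probability table are illustrative, not the paper's code:

```python
import math

def reorder_on_of(words):
    """If 'of' sits in the middle of an English organization name, swap the
    parts around it so the word order matches the Chinese convention
    (e.g. 'General Administration of Customs' -> 'Customs General Administration')."""
    if "of" in words[1:-1]:
        k = words.index("of")
        return words[k + 1:] + words[:k]
    return words

def translation_score(zh_words, en_words, p_trans):
    """Formula (3): exp(-d / L_larger), with d a word-level edit distance whose
    substitution cost is 1 - p(e_j | c_l); p_trans maps (english, chinese) word
    pairs to probabilities estimated with IBM Model 1."""
    en = reorder_on_of(en_words)
    m, n = len(zh_words), len(en)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = i
    for j in range(1, n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 1.0 - p_trans.get((en[j - 1], zh_words[i - 1]), 0.0)
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + sub)
    return math.exp(-d[m][n] / max(m, n))
```

A pair whose words translate each other with probability 1 gets distance 0 and score 1; insertions, deletions and weak translations all reduce the score.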
{
"text": "The word length score represents the length relationship between the source NE and the target NE. Statistics on a bilingual organization name list with 25,380 pairs show that 74% of organization name pairs have a length measure above 0.7. Here the length measure refers to the ratio of the shorter NE's length to its translation's length. For NEs which need transliteration, such as person names, we first transform them into pinyin, and then compare the pinyin strings with their corresponding English NEs. Statistics on a person name list (672,638 pairs) indicate that pairs with a length measure above 0.7 account for 85.6% of all person name pairs. So it is reasonable to assume that a NE and its translation should be comparable in length, except for abbreviations. The word length score is defined as: For organization names, the length refers to the number of words composing them; for location names and person names, the length is the number of characters in the English NE or in the pinyin string converted from the Chinese NE. The more comparable the lengths of the two NEs are, the higher the score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Length Score",
"sec_num": "3.1.3"
},
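The exact word-length formula did not survive the PDF extraction; the remaining fragment only says the score relates the shorter NE's length to the longer one's. A plausible sketch under that assumption, with a name of our own choosing:

```python
def length_score(src_len, tgt_len):
    """Hypothetical word-length score: the ratio of the shorter length to the
    longer, matching the paper's 'length measure'. Returns a value in (0, 1],
    1.0 when the two lengths are equal."""
    if src_len == 0 or tgt_len == 0:
        return 0.0
    return min(src_len, tgt_len) / max(src_len, tgt_len)
```

Under this reading, the 0.7 threshold quoted in the statistics is simply this ratio.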
{
"text": "Pascale Fung [6] used context information to extract a bilingual lexicon, under the assumption that words in the source and target language are likely to be mutual translations if their contexts are similar. Based on this assumption, the standard approach builds a context vector for the source word and for the target word. The context vector of the source word is then translated into the target language, so that it can be compared with the target context vector and a similarity between them can be calculated. The detailed algorithm is as follows:",
"cite_spans": [
{
"start": 13,
"end": 16,
"text": "[6]",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Context Score",
"sec_num": "3.1.4"
},
{
"text": "a. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context Score",
"sec_num": "3.1.4"
},
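The context-score pipeline described above (build context vectors, translate the source vector into the target language, compare) can be sketched as follows. The window size, count-based weighting and cosine similarity here are illustrative assumptions, since the paper's algorithm steps were lost in extraction:

```python
import math
from collections import Counter

def context_vector(corpus_sentences, word, window=5):
    """Count co-occurrences of `word` within a fixed window; a simple
    count-based context vector (the paper's exact weighting is not specified)."""
    vec = Counter()
    for sent in corpus_sentences:
        for i, w in enumerate(sent):
            if w == word:
                for c in sent[max(0, i - window):i] + sent[i + 1:i + 1 + window]:
                    vec[c] += 1
    return vec

def translate_vector(vec, lexicon):
    """Map a source-language context vector into the target language via a
    bilingual lexicon (source word -> target word); untranslatable words drop out."""
    out = Counter()
    for w, c in vec.items():
        if w in lexicon:
            out[lexicon[w]] += c
    return out

def cosine(u, v):
    """Cosine similarity between two sparse vectors stored as Counters."""
    dot = sum(u[k] * v[k] for k in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0
```

Words whose translated source context matches the target context closely get a similarity near 1.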
{
"text": "The process of bilingual NE extraction is as follows: first we recognize the Chinese Named Entities in the Chinese corpus with the Chinese NE tagger NLPRCSegTagNer, and recognize the English Named Entities in the English part using GATE. A NE pair can be formed by any two NEs taken respectively from the recognized Chinese NEs and English NEs. Then for each NE pair we decide whether the two NEs are mutual translations according to their feature scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Named Entities Recognition",
"sec_num": "3.2.1"
},
{
"text": "A weighted sum of the scores is given below by combining all the features mentioned above. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Feature Integration",
"sec_num": "3.2.2"
},
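A minimal sketch of this weighted combination; the paper's formula (7) and its tuned per-NE-type weights are not preserved in this extraction, so the weights below are placeholders:

```python
def total_score(scores, weights):
    """Weighted sum of the four feature scores. The weight values are
    illustrative; the paper tunes them separately for each NE type."""
    return sum(weights[name] * scores[name] for name in weights)

# Hypothetical weights summing to 1; not the paper's values.
example_weights = {"transliteration": 0.3, "translation": 0.3,
                   "length": 0.1, "context": 0.3}
```

With all four feature scores at their maximum of 1.0 and weights summing to 1, the total score is 1.0.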
{
"text": "We take Chinese and English news stories from the same period, downloaded from the Internet, as our corpus. The Chinese part contains news published in 2005 from the Chinese version of People's Daily and the Sina network. The English part includes news reports from the same year from the English version of People's Daily and Chinadaily. Since the Chinese and English parts of the corpus share many topics but are not mutual translations, the corpus is a comparable corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus",
"sec_num": null
},
{
"text": "The size of the Chinese corpus is about 469M with about 63,000,000 words, and the size of the English corpus is about 264M with about 28,000,000 words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus",
"sec_num": null
},
{
"text": "We first segmented the Chinese text and tokenized the English text, and then tagged them respectively. After that, NEs were recognized from the Chinese and English parts of the corpus. We obtained 98,107 Chinese Person names, 55,591 English Person names, 26,167 Chinese Location names, 51,166 English Location names, 63,010 Chinese Organization names and 13,300 English Organization names. So many NEs were acquired that evaluation became difficult, since we cannot manually count how many NE pairs are mutual translations. In order to reduce the size of the test data and facilitate our test, we selected the Chinese NEs which occurred at least 10 times and had a translation with frequency larger than 10 in the English corpus as our test set. We call these words Chinese source words. Then we selected the English NEs with frequency above 10 in the English corpus as the English candidate words. The answer set is the NE pairs we can find in the LDC Named Entity Dictionary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Test Set",
"sec_num": null
},
{
"text": "As person names, location names and organization names have different characteristics, they are processed separately with different weights.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Method",
"sec_num": null
},
{
"text": "We first calculate all feature scores of each possible NE pair in the test set, and then combine those scores into a total score using formula (7) . After that, the M NE pairs with the highest scores are selected. Finally, for every Chinese NE, N NE pairs are chosen from the M pairs as the results of our system; that is, these NE pairs are regarded as mutual translations. Assume that our system selects",
"cite_spans": [
{
"start": 153,
"end": 156,
"text": "(7)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Method",
"sec_num": null
},
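The evaluation described above reduces to standard precision/recall/F-score over the selected NE pairs; a small sketch with our own variable names:

```python
def prf(num_selected, num_correct, num_gold):
    """Precision, recall and F-score: the system outputs num_selected NE pairs,
    num_correct of them are true translations, and the answer set contains
    num_gold pairs."""
    p = num_correct / num_selected if num_selected else 0.0
    r = num_correct / num_gold if num_gold else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f
```

For example, 5 correct pairs out of 10 selected against 20 gold pairs give precision 0.5 and recall 0.25.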
{
"text": "Three sets of experiments are carried out to investigate the performance of the multi-feature integration method. First, we compare our proposed method with the previous method of calculating translation scores. Second, we test how each feature influences the system's performance. Finally, the effect of varying the parameter M is investigated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment",
"sec_num": "4.2"
},
{
"text": "Previous work used IBM Model 1 or a similar formula to calculate the translation score. These methods did not take the order of words into account. We use a modified edit distance that considers both semantic similarity and word order, and achieve a much better result.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparing Different Methods Adopted in Translation Score Calculation",
"sec_num": "4.2.1"
},
{
"text": "In table 2 ",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Comparing Different Methods Adopted in Translation Score Calculation",
"sec_num": "4.2.1"
},
{
"text": "where c_j and e_i are the j-th Chinese word and the i-th English word respectively, and m and n are the lengths of the Chinese NE and the English NE respectively. The result indicates that the modified edit distance method greatly outperforms Donghui Feng's method [2] on translation score calculation, leading to an increase of about 11% in F-score.",
"cite_spans": [
{
"start": 244,
"end": 247,
"text": "[2]",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparing Different Methods Adopted in Translation Score Calculation",
"sec_num": "4.2.1"
},
{
"text": "We also tried Fei Huang's approach [3] , which uses the formula of IBM Model 1 as the translation score. However, it performed worse, since the multiplication in the formula, combined with data sparseness, drove almost all scores to zero.",
"cite_spans": [
{
"start": 35,
"end": 38,
"text": "[3]",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparing Different Methods Adopted in Translation Score Calculation",
"sec_num": "4.2.1"
},
{
"text": "In order to investigate the influence of each feature, we add the features one by one to the system and observe the change in performance. Table 3 shows the precision/recall/F-score using different feature sets, with parameters M = 10000, N = 1. It can be seen that by adding more information, both precision and recall are improved, so every feature is useful. In particular, the transliteration score and the context score are more effective than the other feature scores for person names and location names, while the translation score and the context score lead to more improvement for organization names.",
"cite_spans": [],
"ref_spans": [
{
"start": 129,
"end": 136,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Influence of Each Feature",
"sec_num": "4.2.2"
},
{
"text": "We also investigate the effect of varying M. The results are shown in Table 4 . One can see that when M increases, the precision becomes lower while the recall becomes higher. ",
"cite_spans": [],
"ref_spans": [
{
"start": 70,
"end": 77,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Influence of Parameters",
"sec_num": "4.2.3"
},
{
"text": "We propose a multi-feature based method to extract bilingual NE pairs from comparable corpora, which is harder than extracting them from parallel corpora as much previous work did. When calculating the translation score, a modified edit distance method is used, which proves more effective than the previous method. Experiments on a one-year comparable news corpus show that our multi-feature method obtains encouraging results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5."
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "FASTUS: A Cascaded Finite-State Transducer for Extracting Information from Natural Language Text",
"authors": [
{
"first": "J",
"middle": [],
"last": "Hobbs",
"suffix": ""
}
],
"year": 1996,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hobbs, J. et al. 1996. FASTUS: A Cascaded Finite-State Transducer for Extracting Information from Natural Language Text, MIT Press. Cambridge, MA.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A New Approach for English-Chinese Named Entity Alignment",
"authors": [
{
"first": "Dong-Hui",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Ya-Juan",
"middle": [],
"last": "Lv",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2004,
"venue": "Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dong-Hui Feng, Ya-Juan Lv, Ming Zhou,\"A New Approach for English-Chinese Named Entity Alignment\", 2004 Conference on Empirical Methods in Natural Language Processing, Barcelona, Spain, Jul. 2004.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Automatic Extraction of Named Entity Translingual Equivalence Based on Multi-feature Cost Minimization",
"authors": [
{
"first": "Fei",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Vogel",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Waibel",
"suffix": ""
}
],
"year": 2003,
"venue": "the Proceedings of the 2003 Annual Conference of the Association for Computational Linguistics (ACL'03)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fei Huang, Stephan Vogel and Alex Waibel, Automatic Extraction of Named Entity Translingual Equivalence Based on Multi-feature Cost Minimization, in the Proceedings of the 2003 Annual Conference of the Association for Computational Linguistics (ACL'03), Workshop on Multilingual and Mixed-language Named Entity Recognition, July, 2003",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Mining Very-Non-Parallel Corpora: Parallel Sentence and Lexicon Extraction via Bootstrapping and EM",
"authors": [
{
"first": "Pascale",
"middle": [],
"last": "Fung",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Cheung",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of EMNLP 2004",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pascale Fung and Percy Cheung, Mining Very-Non-Parallel Corpora: Parallel Sentence and Lexicon Extraction via Bootstrapping and EM, In Proceedings of EMNLP 2004, Barcelona, Spain: July 2004.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Learning Formulation and Transformation Rules for Multilingual Named Entities",
"authors": [
{
"first": "Changhua",
"middle": [],
"last": "Hsin-Hsi Chen",
"suffix": ""
},
{
"first": "Ying",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of ACL 2003 Workshop on Multilingual and Mixed-language Named Entity Recognition: Combining Statistical and Symbolic Models",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hsin-Hsi Chen, Changhua Yang and Ying Lin (2003). \"Learning Formulation and Transformation Rules for Multilingual Named Entities.\" Proceedings of ACL 2003 Workshop on Multilingual and Mixed-language Named Entity Recognition: Combining Statistical and Symbolic Models, July 12, Sapporo, Japan, 2003, 1-8.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Statistical View on Bilingual Lexicon Extraction: from Parallel Corpora to Non-parallel Corpora",
"authors": [
{
"first": "Pascale",
"middle": [],
"last": "Fung",
"suffix": ""
}
],
"year": 1998,
"venue": "Lecture Notes in Artificial Intelligence",
"volume": "1529",
"issue": "",
"pages": "1--17",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pascale Fung, \"Statistical View on Bilingual Lexicon Extraction: from Parallel Corpora to Non-parallel Corpora\". In Lecture Notes in Artificial Intelligence, Springer Publisher, 1998, vol 1529, 1-17. Invited speech, AMTA 98.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Mining New Word Translations from Comparable Corpora",
"authors": [
{
"first": "Li",
"middle": [],
"last": "Shao",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Li Shao, Hwee Tou Ng (2004). \"Mining New Word Translations from Comparable Corpora.\" COLING 2004.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Named Entity Translation: Extended Abstract",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Al-Onaizan",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of Human Language Technology",
"volume": "",
"issue": "",
"pages": "111--115",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. Al-Onaizan and K. Knight. 2002. Named Entity Translation: Extended Abstract. In Proceedings of Human Language Technology 2002, pp. 111-115, San Diego, CA, March, 2002.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "The mathematics of Machine Translation: Parameter Estimation",
"authors": [
{
"first": "P",
"middle": [
"F"
],
"last": "Brown",
"suffix": ""
},
{
"first": "S",
"middle": [
"A"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "V",
"middle": [
"J"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "R",
"middle": [
"L"
],
"last": "Mercer",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "",
"pages": "263--311",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. F. Brown, S. A. Della Pietra, V. J. Della Pietra and R.L. Mercer. The mathematics of Machine Translation: Parameter Estimation. In Computational Linguistics, vol 19, number 2. pp263-311, June, 1993.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "of the shorter NE in a pair, and L_larger is the length of the longer one.",
"num": null,
"uris": null
},
"FIGREF1": {
"type_str": "figure",
"text": "R pairs of NE as the final results, among which R_2 (R_2 \u2264 R) are real mutual translations, and the test set actually contains pairs of NE. So our precision is",
"num": null,
"uris": null
},
"FIGREF2": {
"type_str": "figure",
"text": "our method and the method in [2] are compared on the organization name test set. The translation score in [2] is defined as:",
"num": null,
"uris": null
},
"TABREF2": {
"text": "Comparing edit distance with previous method (The parameter is M = 10000, N = 1)",
"type_str": "table",
"num": null,
"html": null,
"content": "<table><tr><td>system</td><td>P</td><td>R</td><td>F</td></tr><tr><td>previous</td><td>33.1%</td><td>25.6%</td><td>28.9%</td></tr><tr><td>edit distance</td><td>45.0%</td><td>36.2%</td><td>40.1%</td></tr></table>"
},
"TABREF3": {
"text": "Performance of the system when different feature sets are used",
"type_str": "table",
"num": null,
"html": null,
"content": "<table><tr><td>features selected</td><td>P</td><td>R</td><td>F</td></tr><tr><td>Person Name</td><td/><td/><td/></tr><tr><td>transliteration</td><td>53.0%</td><td colspan=\"2\">46.7% 49.6%</td></tr><tr><td>transliteration+context</td><td>60.3%</td><td colspan=\"2\">52.8% 56.3%</td></tr><tr><td>transliteration+context+length</td><td>61.7%</td><td colspan=\"2\">54.7% 58.0%</td></tr><tr><td>transliteration+context+length+translation</td><td>69.9%</td><td colspan=\"2\">62.2% 65.8%</td></tr><tr><td>Location Name</td><td/><td/><td/></tr><tr><td>transliteration</td><td>44.4%</td><td colspan=\"2\">33.0% 37.9%</td></tr><tr><td>transliteration+context</td><td>74.4%</td><td colspan=\"2\">55.5% 63.6%</td></tr><tr><td>transliteration+context+length</td><td>75.8%</td><td colspan=\"2\">57.0% 65.0%</td></tr><tr><td>transliteration+context+length+translation</td><td>80.0%</td><td colspan=\"2\">60.9% 69.2%</td></tr><tr><td>Organization Name</td><td/><td/><td/></tr><tr><td>transliteration</td><td>3.1%</td><td>2.5%</td><td>2.8%</td></tr><tr><td>transliteration+context</td><td>36.5%</td><td colspan=\"2\">29.2% 32.4%</td></tr><tr><td>transliteration+context+length</td><td>37.1%</td><td colspan=\"2\">29.7% 33.0%</td></tr><tr><td>transliteration+context+length+translation</td><td>66.3%</td><td colspan=\"2\">53.3% 59.1%</td></tr></table>"
},
"TABREF4": {
"text": "Effect when M is changed",
"type_str": "table",
"num": null,
"html": null,
"content": "<table><tr><td>M</td><td colspan=\"3\">Person Name</td><td colspan=\"3\">Location Name</td><td colspan=\"3\">Organization Name</td></tr><tr><td></td><td>P</td><td>R</td><td>F</td><td>P</td><td>R</td><td>F</td><td>P</td><td>R</td><td>F</td></tr><tr><td>1000</td><td>85.0%</td><td>50.0%</td><td>62.9%</td><td>89.6%</td><td>48.2%</td><td>62.7%</td><td>74.2%</td><td>46.2%</td><td>57.0%</td></tr><tr><td>5000</td><td>72.2%</td><td>60.0%</td><td>65.5%</td><td>82.1%</td><td>59.7%</td><td>69.1%</td><td>66.2%</td><td>52.3%</td><td>58.4%</td></tr><tr><td>10000</td><td>69.9%</td><td>62.2%</td><td>65.8%</td><td>80.0%</td><td>60.9%</td><td>69.2%</td><td>66.3%</td><td>53.3%</td><td>59.1%</td></tr></table>"
}
}
}
}